Test Report: KVM_Linux_crio 20623

9a147d453238f682b8f0e5ed98059c714226a9c8:2025-04-14:39135

Test fail (10/321)

TestAddons/parallel/Ingress (158.32s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-102056 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-102056 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-102056 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0e50726f-9091-4bfc-8024-50db9a3d55cf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0e50726f-9091-4bfc-8024-50db9a3d55cf] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.005125063s
I0414 12:58:09.523273 2190400 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-102056 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-102056 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.565459298s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-102056 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-102056 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.15
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-102056 -n addons-102056
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-102056 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-102056 logs -n 25: (1.273651541s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-897367                                                                     | download-only-897367 | jenkins | v1.35.0 | 14 Apr 25 12:53 UTC | 14 Apr 25 12:53 UTC |
	| delete  | -p download-only-101341                                                                     | download-only-101341 | jenkins | v1.35.0 | 14 Apr 25 12:53 UTC | 14 Apr 25 12:53 UTC |
	| delete  | -p download-only-897367                                                                     | download-only-897367 | jenkins | v1.35.0 | 14 Apr 25 12:53 UTC | 14 Apr 25 12:53 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-403481 | jenkins | v1.35.0 | 14 Apr 25 12:53 UTC |                     |
	|         | binary-mirror-403481                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40467                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-403481                                                                     | binary-mirror-403481 | jenkins | v1.35.0 | 14 Apr 25 12:53 UTC | 14 Apr 25 12:53 UTC |
	| addons  | enable dashboard -p                                                                         | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:53 UTC |                     |
	|         | addons-102056                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:53 UTC |                     |
	|         | addons-102056                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-102056 --wait=true                                                                | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:53 UTC | 14 Apr 25 12:57 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-102056 addons disable                                                                | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:57 UTC | 14 Apr 25 12:57 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-102056 addons disable                                                                | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:57 UTC | 14 Apr 25 12:57 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:57 UTC | 14 Apr 25 12:57 UTC |
	|         | -p addons-102056                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-102056 addons                                                                        | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:57 UTC | 14 Apr 25 12:58 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-102056 addons                                                                        | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:57 UTC | 14 Apr 25 12:57 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-102056 addons disable                                                                | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:58 UTC | 14 Apr 25 12:58 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-102056 addons                                                                        | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:58 UTC | 14 Apr 25 12:58 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-102056 ssh curl -s                                                                   | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:58 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-102056 ip                                                                            | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:58 UTC | 14 Apr 25 12:58 UTC |
	| addons  | addons-102056 addons disable                                                                | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:58 UTC | 14 Apr 25 12:58 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-102056 addons disable                                                                | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:58 UTC | 14 Apr 25 12:58 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-102056 ssh cat                                                                       | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:58 UTC | 14 Apr 25 12:58 UTC |
	|         | /opt/local-path-provisioner/pvc-df14fbd3-4cbb-489d-82fc-3b8f87697b3c_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-102056 addons disable                                                                | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:58 UTC | 14 Apr 25 12:59 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-102056 addons                                                                        | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:58 UTC | 14 Apr 25 12:58 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-102056 addons                                                                        | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:59 UTC | 14 Apr 25 12:59 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-102056 addons                                                                        | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 12:59 UTC | 14 Apr 25 12:59 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-102056 ip                                                                            | addons-102056        | jenkins | v1.35.0 | 14 Apr 25 13:00 UTC | 14 Apr 25 13:00 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 12:53:59
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 12:53:59.519211 2191137 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:53:59.519329 2191137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:53:59.519337 2191137 out.go:358] Setting ErrFile to fd 2...
	I0414 12:53:59.519348 2191137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:53:59.519528 2191137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 12:53:59.520166 2191137 out.go:352] Setting JSON to false
	I0414 12:53:59.521263 2191137 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":164178,"bootTime":1744471061,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 12:53:59.521379 2191137 start.go:139] virtualization: kvm guest
	I0414 12:53:59.523226 2191137 out.go:177] * [addons-102056] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 12:53:59.524673 2191137 out.go:177]   - MINIKUBE_LOCATION=20623
	I0414 12:53:59.524696 2191137 notify.go:220] Checking for updates...
	I0414 12:53:59.527262 2191137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 12:53:59.528666 2191137 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 12:53:59.530004 2191137 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 12:53:59.531346 2191137 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 12:53:59.532578 2191137 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 12:53:59.533914 2191137 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 12:53:59.567076 2191137 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 12:53:59.568264 2191137 start.go:297] selected driver: kvm2
	I0414 12:53:59.568279 2191137 start.go:901] validating driver "kvm2" against <nil>
	I0414 12:53:59.568290 2191137 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 12:53:59.569028 2191137 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:53:59.569154 2191137 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20623-2183077/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 12:53:59.584648 2191137 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 12:53:59.584700 2191137 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 12:53:59.584971 2191137 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 12:53:59.585009 2191137 cni.go:84] Creating CNI manager for ""
	I0414 12:53:59.585057 2191137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:53:59.585067 2191137 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 12:53:59.585129 2191137 start.go:340] cluster config:
	{Name:addons-102056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-102056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:53:59.585239 2191137 iso.go:125] acquiring lock: {Name:mk1b6bc811d798b73231639961523f4c8d001a9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:53:59.586838 2191137 out.go:177] * Starting "addons-102056" primary control-plane node in "addons-102056" cluster
	I0414 12:53:59.587917 2191137 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 12:53:59.587967 2191137 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 12:53:59.587980 2191137 cache.go:56] Caching tarball of preloaded images
	I0414 12:53:59.588081 2191137 preload.go:172] Found /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 12:53:59.588093 2191137 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 12:53:59.588532 2191137 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/config.json ...
	I0414 12:53:59.588566 2191137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/config.json: {Name:mk06fe1f168ace26a4787e73e66020bad1f61b55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:53:59.588763 2191137 start.go:360] acquireMachinesLock for addons-102056: {Name:mka8bf7d0904b7ab9a32ecac2c5513c5d5418afd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 12:53:59.588831 2191137 start.go:364] duration metric: took 47.312µs to acquireMachinesLock for "addons-102056"
	I0414 12:53:59.588850 2191137 start.go:93] Provisioning new machine with config: &{Name:addons-102056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-102056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 12:53:59.588928 2191137 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 12:53:59.590307 2191137 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0414 12:53:59.590457 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:53:59.590506 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:53:59.605262 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
	I0414 12:53:59.605838 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:53:59.606497 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:53:59.606519 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:53:59.606879 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:53:59.607064 2191137 main.go:141] libmachine: (addons-102056) Calling .GetMachineName
	I0414 12:53:59.607185 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:53:59.607312 2191137 start.go:159] libmachine.API.Create for "addons-102056" (driver="kvm2")
	I0414 12:53:59.607338 2191137 client.go:168] LocalClient.Create starting
	I0414 12:53:59.607386 2191137 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem
	I0414 12:53:59.744750 2191137 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem
	I0414 12:54:00.256413 2191137 main.go:141] libmachine: Running pre-create checks...
	I0414 12:54:00.256438 2191137 main.go:141] libmachine: (addons-102056) Calling .PreCreateCheck
	I0414 12:54:00.256963 2191137 main.go:141] libmachine: (addons-102056) Calling .GetConfigRaw
	I0414 12:54:00.257450 2191137 main.go:141] libmachine: Creating machine...
	I0414 12:54:00.257467 2191137 main.go:141] libmachine: (addons-102056) Calling .Create
	I0414 12:54:00.257689 2191137 main.go:141] libmachine: (addons-102056) creating KVM machine...
	I0414 12:54:00.257708 2191137 main.go:141] libmachine: (addons-102056) creating network...
	I0414 12:54:00.259125 2191137 main.go:141] libmachine: (addons-102056) DBG | found existing default KVM network
	I0414 12:54:00.260028 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:00.259831 2191159 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00021edd0}
	I0414 12:54:00.260083 2191137 main.go:141] libmachine: (addons-102056) DBG | created network xml: 
	I0414 12:54:00.260100 2191137 main.go:141] libmachine: (addons-102056) DBG | <network>
	I0414 12:54:00.260110 2191137 main.go:141] libmachine: (addons-102056) DBG |   <name>mk-addons-102056</name>
	I0414 12:54:00.260121 2191137 main.go:141] libmachine: (addons-102056) DBG |   <dns enable='no'/>
	I0414 12:54:00.260132 2191137 main.go:141] libmachine: (addons-102056) DBG |   
	I0414 12:54:00.260140 2191137 main.go:141] libmachine: (addons-102056) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 12:54:00.260148 2191137 main.go:141] libmachine: (addons-102056) DBG |     <dhcp>
	I0414 12:54:00.260159 2191137 main.go:141] libmachine: (addons-102056) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 12:54:00.260186 2191137 main.go:141] libmachine: (addons-102056) DBG |     </dhcp>
	I0414 12:54:00.260200 2191137 main.go:141] libmachine: (addons-102056) DBG |   </ip>
	I0414 12:54:00.260211 2191137 main.go:141] libmachine: (addons-102056) DBG |   
	I0414 12:54:00.260220 2191137 main.go:141] libmachine: (addons-102056) DBG | </network>
	I0414 12:54:00.260228 2191137 main.go:141] libmachine: (addons-102056) DBG | 
	I0414 12:54:00.265552 2191137 main.go:141] libmachine: (addons-102056) DBG | trying to create private KVM network mk-addons-102056 192.168.39.0/24...
	I0414 12:54:00.338272 2191137 main.go:141] libmachine: (addons-102056) DBG | private KVM network mk-addons-102056 192.168.39.0/24 created
	I0414 12:54:00.338320 2191137 main.go:141] libmachine: (addons-102056) setting up store path in /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056 ...
	I0414 12:54:00.338332 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:00.338264 2191159 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 12:54:00.338350 2191137 main.go:141] libmachine: (addons-102056) building disk image from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 12:54:00.338439 2191137 main.go:141] libmachine: (addons-102056) Downloading /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 12:54:00.648380 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:00.648247 2191159 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa...
	I0414 12:54:00.900166 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:00.900018 2191159 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/addons-102056.rawdisk...
	I0414 12:54:00.900197 2191137 main.go:141] libmachine: (addons-102056) DBG | Writing magic tar header
	I0414 12:54:00.900206 2191137 main.go:141] libmachine: (addons-102056) DBG | Writing SSH key tar header
	I0414 12:54:00.900214 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:00.900137 2191159 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056 ...
	I0414 12:54:00.900224 2191137 main.go:141] libmachine: (addons-102056) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056
	I0414 12:54:00.900304 2191137 main.go:141] libmachine: (addons-102056) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056 (perms=drwx------)
	I0414 12:54:00.900326 2191137 main.go:141] libmachine: (addons-102056) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines
	I0414 12:54:00.900336 2191137 main.go:141] libmachine: (addons-102056) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines (perms=drwxr-xr-x)
	I0414 12:54:00.900351 2191137 main.go:141] libmachine: (addons-102056) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube (perms=drwxr-xr-x)
	I0414 12:54:00.900370 2191137 main.go:141] libmachine: (addons-102056) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 12:54:00.900379 2191137 main.go:141] libmachine: (addons-102056) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077 (perms=drwxrwxr-x)
	I0414 12:54:00.900393 2191137 main.go:141] libmachine: (addons-102056) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 12:54:00.900406 2191137 main.go:141] libmachine: (addons-102056) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 12:54:00.900418 2191137 main.go:141] libmachine: (addons-102056) creating domain...
	I0414 12:54:00.900433 2191137 main.go:141] libmachine: (addons-102056) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077
	I0414 12:54:00.900448 2191137 main.go:141] libmachine: (addons-102056) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 12:54:00.900464 2191137 main.go:141] libmachine: (addons-102056) DBG | checking permissions on dir: /home/jenkins
	I0414 12:54:00.900473 2191137 main.go:141] libmachine: (addons-102056) DBG | checking permissions on dir: /home
	I0414 12:54:00.900484 2191137 main.go:141] libmachine: (addons-102056) DBG | skipping /home - not owner
	I0414 12:54:00.901586 2191137 main.go:141] libmachine: (addons-102056) define libvirt domain using xml: 
	I0414 12:54:00.901606 2191137 main.go:141] libmachine: (addons-102056) <domain type='kvm'>
	I0414 12:54:00.901614 2191137 main.go:141] libmachine: (addons-102056)   <name>addons-102056</name>
	I0414 12:54:00.901625 2191137 main.go:141] libmachine: (addons-102056)   <memory unit='MiB'>4000</memory>
	I0414 12:54:00.901633 2191137 main.go:141] libmachine: (addons-102056)   <vcpu>2</vcpu>
	I0414 12:54:00.901639 2191137 main.go:141] libmachine: (addons-102056)   <features>
	I0414 12:54:00.901646 2191137 main.go:141] libmachine: (addons-102056)     <acpi/>
	I0414 12:54:00.901653 2191137 main.go:141] libmachine: (addons-102056)     <apic/>
	I0414 12:54:00.901659 2191137 main.go:141] libmachine: (addons-102056)     <pae/>
	I0414 12:54:00.901665 2191137 main.go:141] libmachine: (addons-102056)     
	I0414 12:54:00.901677 2191137 main.go:141] libmachine: (addons-102056)   </features>
	I0414 12:54:00.901698 2191137 main.go:141] libmachine: (addons-102056)   <cpu mode='host-passthrough'>
	I0414 12:54:00.901734 2191137 main.go:141] libmachine: (addons-102056)   
	I0414 12:54:00.901762 2191137 main.go:141] libmachine: (addons-102056)   </cpu>
	I0414 12:54:00.901791 2191137 main.go:141] libmachine: (addons-102056)   <os>
	I0414 12:54:00.901815 2191137 main.go:141] libmachine: (addons-102056)     <type>hvm</type>
	I0414 12:54:00.901822 2191137 main.go:141] libmachine: (addons-102056)     <boot dev='cdrom'/>
	I0414 12:54:00.901830 2191137 main.go:141] libmachine: (addons-102056)     <boot dev='hd'/>
	I0414 12:54:00.901854 2191137 main.go:141] libmachine: (addons-102056)     <bootmenu enable='no'/>
	I0414 12:54:00.901864 2191137 main.go:141] libmachine: (addons-102056)   </os>
	I0414 12:54:00.901872 2191137 main.go:141] libmachine: (addons-102056)   <devices>
	I0414 12:54:00.901882 2191137 main.go:141] libmachine: (addons-102056)     <disk type='file' device='cdrom'>
	I0414 12:54:00.901895 2191137 main.go:141] libmachine: (addons-102056)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/boot2docker.iso'/>
	I0414 12:54:00.901906 2191137 main.go:141] libmachine: (addons-102056)       <target dev='hdc' bus='scsi'/>
	I0414 12:54:00.901911 2191137 main.go:141] libmachine: (addons-102056)       <readonly/>
	I0414 12:54:00.901921 2191137 main.go:141] libmachine: (addons-102056)     </disk>
	I0414 12:54:00.901934 2191137 main.go:141] libmachine: (addons-102056)     <disk type='file' device='disk'>
	I0414 12:54:00.901946 2191137 main.go:141] libmachine: (addons-102056)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 12:54:00.901961 2191137 main.go:141] libmachine: (addons-102056)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/addons-102056.rawdisk'/>
	I0414 12:54:00.901972 2191137 main.go:141] libmachine: (addons-102056)       <target dev='hda' bus='virtio'/>
	I0414 12:54:00.901979 2191137 main.go:141] libmachine: (addons-102056)     </disk>
	I0414 12:54:00.901988 2191137 main.go:141] libmachine: (addons-102056)     <interface type='network'>
	I0414 12:54:00.902003 2191137 main.go:141] libmachine: (addons-102056)       <source network='mk-addons-102056'/>
	I0414 12:54:00.902021 2191137 main.go:141] libmachine: (addons-102056)       <model type='virtio'/>
	I0414 12:54:00.902033 2191137 main.go:141] libmachine: (addons-102056)     </interface>
	I0414 12:54:00.902044 2191137 main.go:141] libmachine: (addons-102056)     <interface type='network'>
	I0414 12:54:00.902064 2191137 main.go:141] libmachine: (addons-102056)       <source network='default'/>
	I0414 12:54:00.902081 2191137 main.go:141] libmachine: (addons-102056)       <model type='virtio'/>
	I0414 12:54:00.902092 2191137 main.go:141] libmachine: (addons-102056)     </interface>
	I0414 12:54:00.902102 2191137 main.go:141] libmachine: (addons-102056)     <serial type='pty'>
	I0414 12:54:00.902114 2191137 main.go:141] libmachine: (addons-102056)       <target port='0'/>
	I0414 12:54:00.902122 2191137 main.go:141] libmachine: (addons-102056)     </serial>
	I0414 12:54:00.902145 2191137 main.go:141] libmachine: (addons-102056)     <console type='pty'>
	I0414 12:54:00.902169 2191137 main.go:141] libmachine: (addons-102056)       <target type='serial' port='0'/>
	I0414 12:54:00.902180 2191137 main.go:141] libmachine: (addons-102056)     </console>
	I0414 12:54:00.902188 2191137 main.go:141] libmachine: (addons-102056)     <rng model='virtio'>
	I0414 12:54:00.902202 2191137 main.go:141] libmachine: (addons-102056)       <backend model='random'>/dev/random</backend>
	I0414 12:54:00.902212 2191137 main.go:141] libmachine: (addons-102056)     </rng>
	I0414 12:54:00.902233 2191137 main.go:141] libmachine: (addons-102056)     
	I0414 12:54:00.902245 2191137 main.go:141] libmachine: (addons-102056)     
	I0414 12:54:00.902258 2191137 main.go:141] libmachine: (addons-102056)   </devices>
	I0414 12:54:00.902270 2191137 main.go:141] libmachine: (addons-102056) </domain>
	I0414 12:54:00.902281 2191137 main.go:141] libmachine: (addons-102056) 
	I0414 12:54:00.906389 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:b0:c5:f1 in network default
	I0414 12:54:00.907049 2191137 main.go:141] libmachine: (addons-102056) starting domain...
	I0414 12:54:00.907070 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:00.907078 2191137 main.go:141] libmachine: (addons-102056) ensuring networks are active...
	I0414 12:54:00.907824 2191137 main.go:141] libmachine: (addons-102056) Ensuring network default is active
	I0414 12:54:00.908118 2191137 main.go:141] libmachine: (addons-102056) Ensuring network mk-addons-102056 is active
	I0414 12:54:00.908611 2191137 main.go:141] libmachine: (addons-102056) getting domain XML...
	I0414 12:54:00.909488 2191137 main.go:141] libmachine: (addons-102056) creating domain...
	I0414 12:54:02.139399 2191137 main.go:141] libmachine: (addons-102056) waiting for IP...
	I0414 12:54:02.140277 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:02.140790 2191137 main.go:141] libmachine: (addons-102056) DBG | unable to find current IP address of domain addons-102056 in network mk-addons-102056
	I0414 12:54:02.140877 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:02.140806 2191159 retry.go:31] will retry after 291.673924ms: waiting for domain to come up
	I0414 12:54:02.434504 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:02.434978 2191137 main.go:141] libmachine: (addons-102056) DBG | unable to find current IP address of domain addons-102056 in network mk-addons-102056
	I0414 12:54:02.435009 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:02.434959 2191159 retry.go:31] will retry after 369.018166ms: waiting for domain to come up
	I0414 12:54:02.805673 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:02.806163 2191137 main.go:141] libmachine: (addons-102056) DBG | unable to find current IP address of domain addons-102056 in network mk-addons-102056
	I0414 12:54:02.806188 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:02.806109 2191159 retry.go:31] will retry after 316.069935ms: waiting for domain to come up
	I0414 12:54:03.123714 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:03.124299 2191137 main.go:141] libmachine: (addons-102056) DBG | unable to find current IP address of domain addons-102056 in network mk-addons-102056
	I0414 12:54:03.124333 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:03.124275 2191159 retry.go:31] will retry after 444.612861ms: waiting for domain to come up
	I0414 12:54:03.571013 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:03.571429 2191137 main.go:141] libmachine: (addons-102056) DBG | unable to find current IP address of domain addons-102056 in network mk-addons-102056
	I0414 12:54:03.571463 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:03.571380 2191159 retry.go:31] will retry after 740.450455ms: waiting for domain to come up
	I0414 12:54:04.313086 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:04.313539 2191137 main.go:141] libmachine: (addons-102056) DBG | unable to find current IP address of domain addons-102056 in network mk-addons-102056
	I0414 12:54:04.313603 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:04.313510 2191159 retry.go:31] will retry after 737.299657ms: waiting for domain to come up
	I0414 12:54:05.052198 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:05.052642 2191137 main.go:141] libmachine: (addons-102056) DBG | unable to find current IP address of domain addons-102056 in network mk-addons-102056
	I0414 12:54:05.052667 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:05.052597 2191159 retry.go:31] will retry after 776.036256ms: waiting for domain to come up
	I0414 12:54:05.829877 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:05.830337 2191137 main.go:141] libmachine: (addons-102056) DBG | unable to find current IP address of domain addons-102056 in network mk-addons-102056
	I0414 12:54:05.830367 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:05.830276 2191159 retry.go:31] will retry after 949.566843ms: waiting for domain to come up
	I0414 12:54:06.781439 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:06.781821 2191137 main.go:141] libmachine: (addons-102056) DBG | unable to find current IP address of domain addons-102056 in network mk-addons-102056
	I0414 12:54:06.781850 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:06.781800 2191159 retry.go:31] will retry after 1.617109263s: waiting for domain to come up
	I0414 12:54:08.401779 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:08.402314 2191137 main.go:141] libmachine: (addons-102056) DBG | unable to find current IP address of domain addons-102056 in network mk-addons-102056
	I0414 12:54:08.402333 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:08.402258 2191159 retry.go:31] will retry after 2.180212819s: waiting for domain to come up
	I0414 12:54:10.584699 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:10.585200 2191137 main.go:141] libmachine: (addons-102056) DBG | unable to find current IP address of domain addons-102056 in network mk-addons-102056
	I0414 12:54:10.585228 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:10.585142 2191159 retry.go:31] will retry after 1.800959623s: waiting for domain to come up
	I0414 12:54:12.388212 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:12.388757 2191137 main.go:141] libmachine: (addons-102056) DBG | unable to find current IP address of domain addons-102056 in network mk-addons-102056
	I0414 12:54:12.388810 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:12.388691 2191159 retry.go:31] will retry after 3.528677529s: waiting for domain to come up
	I0414 12:54:15.918922 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:15.919424 2191137 main.go:141] libmachine: (addons-102056) DBG | unable to find current IP address of domain addons-102056 in network mk-addons-102056
	I0414 12:54:15.919470 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:15.919396 2191159 retry.go:31] will retry after 3.579524315s: waiting for domain to come up
	I0414 12:54:19.503374 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:19.503804 2191137 main.go:141] libmachine: (addons-102056) DBG | unable to find current IP address of domain addons-102056 in network mk-addons-102056
	I0414 12:54:19.503832 2191137 main.go:141] libmachine: (addons-102056) DBG | I0414 12:54:19.503772 2191159 retry.go:31] will retry after 3.592188249s: waiting for domain to come up
	I0414 12:54:23.100492 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:23.101126 2191137 main.go:141] libmachine: (addons-102056) found domain IP: 192.168.39.15
	I0414 12:54:23.101154 2191137 main.go:141] libmachine: (addons-102056) reserving static IP address...
	I0414 12:54:23.101166 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has current primary IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:23.101548 2191137 main.go:141] libmachine: (addons-102056) DBG | unable to find host DHCP lease matching {name: "addons-102056", mac: "52:54:00:6a:18:7d", ip: "192.168.39.15"} in network mk-addons-102056
	I0414 12:54:23.185677 2191137 main.go:141] libmachine: (addons-102056) reserved static IP address 192.168.39.15 for domain addons-102056
	I0414 12:54:23.185716 2191137 main.go:141] libmachine: (addons-102056) waiting for SSH...
	I0414 12:54:23.185726 2191137 main.go:141] libmachine: (addons-102056) DBG | Getting to WaitForSSH function...
	I0414 12:54:23.188908 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:23.189399 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:23.189434 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:23.189527 2191137 main.go:141] libmachine: (addons-102056) DBG | Using SSH client type: external
	I0414 12:54:23.189548 2191137 main.go:141] libmachine: (addons-102056) DBG | Using SSH private key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa (-rw-------)
	I0414 12:54:23.189580 2191137 main.go:141] libmachine: (addons-102056) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 12:54:23.189600 2191137 main.go:141] libmachine: (addons-102056) DBG | About to run SSH command:
	I0414 12:54:23.189613 2191137 main.go:141] libmachine: (addons-102056) DBG | exit 0
	I0414 12:54:23.313366 2191137 main.go:141] libmachine: (addons-102056) DBG | SSH cmd err, output: <nil>: 
	I0414 12:54:23.313681 2191137 main.go:141] libmachine: (addons-102056) KVM machine creation complete
	I0414 12:54:23.314025 2191137 main.go:141] libmachine: (addons-102056) Calling .GetConfigRaw
	I0414 12:54:23.314608 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:23.314840 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:23.315056 2191137 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 12:54:23.315080 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:23.316626 2191137 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 12:54:23.316644 2191137 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 12:54:23.316649 2191137 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 12:54:23.316655 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:23.319311 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:23.319759 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:23.319793 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:23.320008 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:23.320246 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:23.320416 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:23.320655 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:23.320874 2191137 main.go:141] libmachine: Using SSH client type: native
	I0414 12:54:23.321188 2191137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0414 12:54:23.321209 2191137 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 12:54:23.420209 2191137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 12:54:23.420244 2191137 main.go:141] libmachine: Detecting the provisioner...
	I0414 12:54:23.420258 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:23.423052 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:23.423478 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:23.423510 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:23.423665 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:23.423906 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:23.424085 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:23.424345 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:23.424582 2191137 main.go:141] libmachine: Using SSH client type: native
	I0414 12:54:23.424826 2191137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0414 12:54:23.424839 2191137 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 12:54:23.525512 2191137 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 12:54:23.525599 2191137 main.go:141] libmachine: found compatible host: buildroot
	I0414 12:54:23.525606 2191137 main.go:141] libmachine: Provisioning with buildroot...
	I0414 12:54:23.525615 2191137 main.go:141] libmachine: (addons-102056) Calling .GetMachineName
	I0414 12:54:23.525909 2191137 buildroot.go:166] provisioning hostname "addons-102056"
	I0414 12:54:23.525946 2191137 main.go:141] libmachine: (addons-102056) Calling .GetMachineName
	I0414 12:54:23.526172 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:23.529890 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:23.530335 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:23.530359 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:23.530591 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:23.530771 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:23.531013 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:23.531199 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:23.531454 2191137 main.go:141] libmachine: Using SSH client type: native
	I0414 12:54:23.531667 2191137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0414 12:54:23.531683 2191137 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-102056 && echo "addons-102056" | sudo tee /etc/hostname
	I0414 12:54:23.643066 2191137 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-102056
	
	I0414 12:54:23.643106 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:23.646183 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:23.646598 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:23.646630 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:23.646871 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:23.647082 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:23.647287 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:23.647471 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:23.647631 2191137 main.go:141] libmachine: Using SSH client type: native
	I0414 12:54:23.647945 2191137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0414 12:54:23.647969 2191137 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-102056' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-102056/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-102056' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 12:54:23.753990 2191137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 12:54:23.754033 2191137 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20623-2183077/.minikube CaCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20623-2183077/.minikube}
	I0414 12:54:23.754066 2191137 buildroot.go:174] setting up certificates
	I0414 12:54:23.754084 2191137 provision.go:84] configureAuth start
	I0414 12:54:23.754123 2191137 main.go:141] libmachine: (addons-102056) Calling .GetMachineName
	I0414 12:54:23.754438 2191137 main.go:141] libmachine: (addons-102056) Calling .GetIP
	I0414 12:54:23.757557 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:23.758092 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:23.758142 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:23.758357 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:23.760651 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:23.761022 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:23.761054 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:23.761197 2191137 provision.go:143] copyHostCerts
	I0414 12:54:23.761274 2191137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem (1675 bytes)
	I0414 12:54:23.761448 2191137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem (1078 bytes)
	I0414 12:54:23.761584 2191137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem (1123 bytes)
	I0414 12:54:23.761659 2191137 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem org=jenkins.addons-102056 san=[127.0.0.1 192.168.39.15 addons-102056 localhost minikube]
	I0414 12:54:23.934340 2191137 provision.go:177] copyRemoteCerts
	I0414 12:54:23.934421 2191137 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 12:54:23.934458 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:23.937197 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:23.937529 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:23.937566 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:23.937760 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:23.937977 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:23.938215 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:23.938407 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:24.019209 2191137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 12:54:24.042990 2191137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 12:54:24.067557 2191137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 12:54:24.091382 2191137 provision.go:87] duration metric: took 337.277826ms to configureAuth
	I0414 12:54:24.091420 2191137 buildroot.go:189] setting minikube options for container-runtime
	I0414 12:54:24.091614 2191137 config.go:182] Loaded profile config "addons-102056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:54:24.091724 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:24.094931 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:24.095319 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:24.095355 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:24.095659 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:24.095917 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:24.096123 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:24.096324 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:24.096520 2191137 main.go:141] libmachine: Using SSH client type: native
	I0414 12:54:24.096717 2191137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0414 12:54:24.096754 2191137 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 12:54:24.311935 2191137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 12:54:24.311980 2191137 main.go:141] libmachine: Checking connection to Docker...
	I0414 12:54:24.311988 2191137 main.go:141] libmachine: (addons-102056) Calling .GetURL
	I0414 12:54:24.313521 2191137 main.go:141] libmachine: (addons-102056) DBG | using libvirt version 6000000
	I0414 12:54:24.315582 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:24.315920 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:24.315954 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:24.316132 2191137 main.go:141] libmachine: Docker is up and running!
	I0414 12:54:24.316153 2191137 main.go:141] libmachine: Reticulating splines...
	I0414 12:54:24.316163 2191137 client.go:171] duration metric: took 24.708811945s to LocalClient.Create
	I0414 12:54:24.316197 2191137 start.go:167] duration metric: took 24.70888583s to libmachine.API.Create "addons-102056"
	I0414 12:54:24.316215 2191137 start.go:293] postStartSetup for "addons-102056" (driver="kvm2")
	I0414 12:54:24.316232 2191137 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 12:54:24.316253 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:24.316501 2191137 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 12:54:24.316532 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:24.318708 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:24.319012 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:24.319041 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:24.319218 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:24.319417 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:24.319565 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:24.319804 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:24.398993 2191137 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 12:54:24.403154 2191137 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 12:54:24.403179 2191137 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/addons for local assets ...
	I0414 12:54:24.403255 2191137 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/files for local assets ...
	I0414 12:54:24.403278 2191137 start.go:296] duration metric: took 87.050354ms for postStartSetup
	I0414 12:54:24.403316 2191137 main.go:141] libmachine: (addons-102056) Calling .GetConfigRaw
	I0414 12:54:24.404005 2191137 main.go:141] libmachine: (addons-102056) Calling .GetIP
	I0414 12:54:24.406930 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:24.407321 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:24.407348 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:24.407550 2191137 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/config.json ...
	I0414 12:54:24.407727 2191137 start.go:128] duration metric: took 24.818786541s to createHost
	I0414 12:54:24.407751 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:24.410065 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:24.410368 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:24.410403 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:24.410527 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:24.410707 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:24.410882 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:24.411004 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:24.411139 2191137 main.go:141] libmachine: Using SSH client type: native
	I0414 12:54:24.411331 2191137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0414 12:54:24.411340 2191137 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 12:54:24.509614 2191137 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744635264.483205403
	
	I0414 12:54:24.509669 2191137 fix.go:216] guest clock: 1744635264.483205403
	I0414 12:54:24.509678 2191137 fix.go:229] Guest: 2025-04-14 12:54:24.483205403 +0000 UTC Remote: 2025-04-14 12:54:24.407739291 +0000 UTC m=+24.924950086 (delta=75.466112ms)
	I0414 12:54:24.509701 2191137 fix.go:200] guest clock delta is within tolerance: 75.466112ms
	I0414 12:54:24.509707 2191137 start.go:83] releasing machines lock for "addons-102056", held for 24.920866475s
	I0414 12:54:24.509745 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:24.510064 2191137 main.go:141] libmachine: (addons-102056) Calling .GetIP
	I0414 12:54:24.512886 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:24.513255 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:24.513279 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:24.513387 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:24.513908 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:24.514111 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:24.514218 2191137 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 12:54:24.514271 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:24.514340 2191137 ssh_runner.go:195] Run: cat /version.json
	I0414 12:54:24.514369 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:24.516864 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:24.517113 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:24.517211 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:24.517237 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:24.517400 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:24.517487 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:24.517509 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:24.517570 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:24.517692 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:24.517763 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:24.517965 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:24.517974 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:24.518101 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:24.518274 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:24.615054 2191137 ssh_runner.go:195] Run: systemctl --version
	I0414 12:54:24.621166 2191137 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 12:54:24.782332 2191137 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 12:54:24.788573 2191137 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 12:54:24.788636 2191137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 12:54:24.806199 2191137 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 12:54:24.806225 2191137 start.go:495] detecting cgroup driver to use...
	I0414 12:54:24.806294 2191137 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 12:54:24.822592 2191137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 12:54:24.836919 2191137 docker.go:217] disabling cri-docker service (if available) ...
	I0414 12:54:24.836985 2191137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 12:54:24.851307 2191137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 12:54:24.865049 2191137 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 12:54:24.977109 2191137 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 12:54:25.136239 2191137 docker.go:233] disabling docker service ...
	I0414 12:54:25.136325 2191137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 12:54:25.151282 2191137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 12:54:25.163957 2191137 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 12:54:25.284559 2191137 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 12:54:25.427729 2191137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 12:54:25.442099 2191137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 12:54:25.460644 2191137 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 12:54:25.460747 2191137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:54:25.471113 2191137 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 12:54:25.471176 2191137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:54:25.481706 2191137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:54:25.492278 2191137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:54:25.502633 2191137 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 12:54:25.513236 2191137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:54:25.524496 2191137 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:54:25.542243 2191137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:54:25.553617 2191137 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 12:54:25.563505 2191137 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 12:54:25.563564 2191137 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 12:54:25.576106 2191137 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 12:54:25.585731 2191137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:54:25.694774 2191137 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 12:54:25.783646 2191137 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 12:54:25.783772 2191137 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 12:54:25.788678 2191137 start.go:563] Will wait 60s for crictl version
	I0414 12:54:25.788775 2191137 ssh_runner.go:195] Run: which crictl
	I0414 12:54:25.792643 2191137 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 12:54:25.839082 2191137 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 12:54:25.839196 2191137 ssh_runner.go:195] Run: crio --version
	I0414 12:54:25.871157 2191137 ssh_runner.go:195] Run: crio --version
	I0414 12:54:25.900352 2191137 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 12:54:25.901509 2191137 main.go:141] libmachine: (addons-102056) Calling .GetIP
	I0414 12:54:25.904250 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:25.904654 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:25.904682 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:25.904925 2191137 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 12:54:25.909239 2191137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 12:54:25.922396 2191137 kubeadm.go:883] updating cluster {Name:addons-102056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-102056 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 12:54:25.922510 2191137 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 12:54:25.922564 2191137 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 12:54:25.955109 2191137 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 12:54:25.955187 2191137 ssh_runner.go:195] Run: which lz4
	I0414 12:54:25.959229 2191137 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 12:54:25.963270 2191137 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 12:54:25.963300 2191137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 12:54:27.341856 2191137 crio.go:462] duration metric: took 1.382652244s to copy over tarball
	I0414 12:54:27.341956 2191137 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 12:54:29.474288 2191137 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.132293871s)
	I0414 12:54:29.474325 2191137 crio.go:469] duration metric: took 2.132422342s to extract the tarball
	I0414 12:54:29.474336 2191137 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 12:54:29.512664 2191137 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 12:54:29.556286 2191137 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 12:54:29.556312 2191137 cache_images.go:84] Images are preloaded, skipping loading
	I0414 12:54:29.556321 2191137 kubeadm.go:934] updating node { 192.168.39.15 8443 v1.32.2 crio true true} ...
	I0414 12:54:29.556424 2191137 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-102056 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-102056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 12:54:29.556506 2191137 ssh_runner.go:195] Run: crio config
	I0414 12:54:29.602060 2191137 cni.go:84] Creating CNI manager for ""
	I0414 12:54:29.602105 2191137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:54:29.602135 2191137 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 12:54:29.602179 2191137 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.15 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-102056 NodeName:addons-102056 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 12:54:29.602322 2191137 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-102056"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.15"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.15"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 12:54:29.602399 2191137 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 12:54:29.613261 2191137 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 12:54:29.613339 2191137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 12:54:29.623197 2191137 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0414 12:54:29.639627 2191137 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 12:54:29.655852 2191137 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0414 12:54:29.671678 2191137 ssh_runner.go:195] Run: grep 192.168.39.15	control-plane.minikube.internal$ /etc/hosts
	I0414 12:54:29.675465 2191137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 12:54:29.688257 2191137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:54:29.821275 2191137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 12:54:29.838245 2191137 certs.go:68] Setting up /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056 for IP: 192.168.39.15
	I0414 12:54:29.838278 2191137 certs.go:194] generating shared ca certs ...
	I0414 12:54:29.838302 2191137 certs.go:226] acquiring lock for ca certs: {Name:mkd994da28098ae08a84efba20f096b52fe71222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:54:29.838501 2191137 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key
	I0414 12:54:29.941036 2191137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt ...
	I0414 12:54:29.941068 2191137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt: {Name:mk544992f42fc1cd13f00c1b13a5c3ecc7f3c86c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:54:29.941276 2191137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key ...
	I0414 12:54:29.941293 2191137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key: {Name:mk5eba51627b342ef9d4f454bf88c5de8e11ffb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:54:29.941400 2191137 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key
	I0414 12:54:30.062235 2191137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt ...
	I0414 12:54:30.062268 2191137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt: {Name:mkaf56384e3b24f01509e3a58731a910dba6380c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:54:30.062460 2191137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key ...
	I0414 12:54:30.062474 2191137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key: {Name:mkec9510aae08fa229af75b50802f1e0ece6d65b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:54:30.062586 2191137 certs.go:256] generating profile certs ...
	I0414 12:54:30.062651 2191137 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.key
	I0414 12:54:30.062678 2191137 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt with IP's: []
	I0414 12:54:31.261425 2191137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt ...
	I0414 12:54:31.261465 2191137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: {Name:mke61776d70ec3fec297010a2959fc6cf9afedc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:54:31.261680 2191137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.key ...
	I0414 12:54:31.261698 2191137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.key: {Name:mkc1fcb8516d3f59027ac42f459dfe9d1a85d3f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:54:31.261811 2191137 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/apiserver.key.f4ebebf6
	I0414 12:54:31.261833 2191137 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/apiserver.crt.f4ebebf6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.15]
	I0414 12:54:31.293674 2191137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/apiserver.crt.f4ebebf6 ...
	I0414 12:54:31.293703 2191137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/apiserver.crt.f4ebebf6: {Name:mk8ad133b1413a8c68ae215a2749903582831f48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:54:31.293903 2191137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/apiserver.key.f4ebebf6 ...
	I0414 12:54:31.293926 2191137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/apiserver.key.f4ebebf6: {Name:mk5fb3beb24cb43c8e9b355d224be166d9a6e7b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:54:31.294036 2191137 certs.go:381] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/apiserver.crt.f4ebebf6 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/apiserver.crt
	I0414 12:54:31.294140 2191137 certs.go:385] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/apiserver.key.f4ebebf6 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/apiserver.key
	I0414 12:54:31.294195 2191137 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/proxy-client.key
	I0414 12:54:31.294215 2191137 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/proxy-client.crt with IP's: []
	I0414 12:54:31.449562 2191137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/proxy-client.crt ...
	I0414 12:54:31.449598 2191137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/proxy-client.crt: {Name:mk1677118e6f96b3a46760462cd7ea6e3445e538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:54:31.449787 2191137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/proxy-client.key ...
	I0414 12:54:31.449808 2191137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/proxy-client.key: {Name:mk072e0bd04f71d9faf075f5bc7fe3e969e0aedf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:54:31.450028 2191137 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 12:54:31.450078 2191137 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem (1078 bytes)
	I0414 12:54:31.450103 2191137 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem (1123 bytes)
	I0414 12:54:31.450124 2191137 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem (1675 bytes)
	I0414 12:54:31.450924 2191137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 12:54:31.482269 2191137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 12:54:31.514156 2191137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 12:54:31.541841 2191137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 12:54:31.565709 2191137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 12:54:31.589406 2191137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 12:54:31.612757 2191137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 12:54:31.640609 2191137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 12:54:31.665120 2191137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 12:54:31.689279 2191137 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 12:54:31.706206 2191137 ssh_runner.go:195] Run: openssl version
	I0414 12:54:31.712165 2191137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 12:54:31.722744 2191137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:54:31.727340 2191137 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:54:31.727396 2191137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:54:31.733427 2191137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 12:54:31.743868 2191137 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 12:54:31.748071 2191137 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 12:54:31.748151 2191137 kubeadm.go:392] StartCluster: {Name:addons-102056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-102056 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:54:31.748263 2191137 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 12:54:31.748307 2191137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 12:54:31.785206 2191137 cri.go:89] found id: ""
	I0414 12:54:31.785292 2191137 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 12:54:31.795483 2191137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 12:54:31.805058 2191137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 12:54:31.814340 2191137 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 12:54:31.814357 2191137 kubeadm.go:157] found existing configuration files:
	
	I0414 12:54:31.814408 2191137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 12:54:31.823010 2191137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 12:54:31.823067 2191137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 12:54:31.831821 2191137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 12:54:31.840272 2191137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 12:54:31.840330 2191137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 12:54:31.849343 2191137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 12:54:31.857794 2191137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 12:54:31.857858 2191137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 12:54:31.866768 2191137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 12:54:31.875332 2191137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 12:54:31.875391 2191137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 12:54:31.884477 2191137 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 12:54:31.939942 2191137 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 12:54:31.940015 2191137 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 12:54:32.052374 2191137 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 12:54:32.052556 2191137 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 12:54:32.052714 2191137 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 12:54:32.060895 2191137 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 12:54:32.199012 2191137 out.go:235]   - Generating certificates and keys ...
	I0414 12:54:32.199194 2191137 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 12:54:32.199279 2191137 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 12:54:32.199400 2191137 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 12:54:32.509248 2191137 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 12:54:32.661573 2191137 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 12:54:32.808788 2191137 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 12:54:32.940129 2191137 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 12:54:32.940276 2191137 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-102056 localhost] and IPs [192.168.39.15 127.0.0.1 ::1]
	I0414 12:54:33.124007 2191137 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 12:54:33.124301 2191137 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-102056 localhost] and IPs [192.168.39.15 127.0.0.1 ::1]
	I0414 12:54:33.472142 2191137 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 12:54:33.568178 2191137 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 12:54:33.825956 2191137 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 12:54:33.826173 2191137 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 12:54:34.210897 2191137 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 12:54:34.329861 2191137 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 12:54:34.502878 2191137 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 12:54:34.594518 2191137 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 12:54:34.762810 2191137 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 12:54:34.763435 2191137 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 12:54:34.765924 2191137 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 12:54:34.767443 2191137 out.go:235]   - Booting up control plane ...
	I0414 12:54:34.767558 2191137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 12:54:34.769631 2191137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 12:54:34.770509 2191137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 12:54:34.786530 2191137 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 12:54:34.792546 2191137 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 12:54:34.792626 2191137 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 12:54:34.930295 2191137 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 12:54:34.930475 2191137 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 12:54:35.431127 2191137 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.250172ms
	I0414 12:54:35.431248 2191137 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 12:54:40.429654 2191137 kubeadm.go:310] [api-check] The API server is healthy after 5.001572559s
	I0414 12:54:40.442736 2191137 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 12:54:40.456829 2191137 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 12:54:40.494104 2191137 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 12:54:40.494409 2191137 kubeadm.go:310] [mark-control-plane] Marking the node addons-102056 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 12:54:40.504743 2191137 kubeadm.go:310] [bootstrap-token] Using token: mtjt7h.zyu66sdaakzw4byi
	I0414 12:54:40.505878 2191137 out.go:235]   - Configuring RBAC rules ...
	I0414 12:54:40.506060 2191137 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 12:54:40.517843 2191137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 12:54:40.527129 2191137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 12:54:40.531735 2191137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 12:54:40.537030 2191137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 12:54:40.544338 2191137 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 12:54:40.836207 2191137 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 12:54:41.267063 2191137 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 12:54:41.835392 2191137 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 12:54:41.836425 2191137 kubeadm.go:310] 
	I0414 12:54:41.836496 2191137 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 12:54:41.836503 2191137 kubeadm.go:310] 
	I0414 12:54:41.836587 2191137 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 12:54:41.836594 2191137 kubeadm.go:310] 
	I0414 12:54:41.836615 2191137 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 12:54:41.836671 2191137 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 12:54:41.836715 2191137 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 12:54:41.836721 2191137 kubeadm.go:310] 
	I0414 12:54:41.836817 2191137 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 12:54:41.836839 2191137 kubeadm.go:310] 
	I0414 12:54:41.836891 2191137 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 12:54:41.836906 2191137 kubeadm.go:310] 
	I0414 12:54:41.836959 2191137 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 12:54:41.837023 2191137 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 12:54:41.837144 2191137 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 12:54:41.837164 2191137 kubeadm.go:310] 
	I0414 12:54:41.837270 2191137 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 12:54:41.837378 2191137 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 12:54:41.837391 2191137 kubeadm.go:310] 
	I0414 12:54:41.837519 2191137 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mtjt7h.zyu66sdaakzw4byi \
	I0414 12:54:41.837657 2191137 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a5a7cfa3817d077a98fd35a9c88a0bda6880ef9130519c66d815ea92b980d7c \
	I0414 12:54:41.837701 2191137 kubeadm.go:310] 	--control-plane 
	I0414 12:54:41.837712 2191137 kubeadm.go:310] 
	I0414 12:54:41.837832 2191137 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 12:54:41.837855 2191137 kubeadm.go:310] 
	I0414 12:54:41.837964 2191137 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mtjt7h.zyu66sdaakzw4byi \
	I0414 12:54:41.838102 2191137 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a5a7cfa3817d077a98fd35a9c88a0bda6880ef9130519c66d815ea92b980d7c 
	I0414 12:54:41.838813 2191137 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 12:54:41.838956 2191137 cni.go:84] Creating CNI manager for ""
	I0414 12:54:41.838979 2191137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:54:41.841115 2191137 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 12:54:41.842153 2191137 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 12:54:41.852989 2191137 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 12:54:41.878004 2191137 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 12:54:41.878128 2191137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:54:41.878181 2191137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-102056 minikube.k8s.io/updated_at=2025_04_14T12_54_41_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=460835bb8f21087bfa90e48a25f4afc66a903d88 minikube.k8s.io/name=addons-102056 minikube.k8s.io/primary=true
	I0414 12:54:41.911479 2191137 ops.go:34] apiserver oom_adj: -16
	I0414 12:54:42.002007 2191137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:54:42.503084 2191137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:54:43.002705 2191137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:54:43.502516 2191137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:54:44.002165 2191137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:54:44.502211 2191137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:54:45.002892 2191137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:54:45.503053 2191137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:54:46.002549 2191137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:54:46.092101 2191137 kubeadm.go:1113] duration metric: took 4.214090308s to wait for elevateKubeSystemPrivileges
	I0414 12:54:46.092161 2191137 kubeadm.go:394] duration metric: took 14.344006847s to StartCluster
	I0414 12:54:46.092190 2191137 settings.go:142] acquiring lock: {Name:mk2be36efecc8d95b489214d6449055db55f6f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:54:46.092314 2191137 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 12:54:46.092881 2191137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/kubeconfig: {Name:mka4d12cff403cd78c270c5ea752d21aa135c1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:54:46.093080 2191137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 12:54:46.093110 2191137 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 12:54:46.093184 2191137 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0414 12:54:46.093319 2191137 addons.go:69] Setting yakd=true in profile "addons-102056"
	I0414 12:54:46.093356 2191137 addons.go:238] Setting addon yakd=true in "addons-102056"
	I0414 12:54:46.093349 2191137 config.go:182] Loaded profile config "addons-102056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:54:46.093400 2191137 host.go:66] Checking if "addons-102056" exists ...
	I0414 12:54:46.093411 2191137 addons.go:69] Setting inspektor-gadget=true in profile "addons-102056"
	I0414 12:54:46.093427 2191137 addons.go:238] Setting addon inspektor-gadget=true in "addons-102056"
	I0414 12:54:46.093449 2191137 host.go:66] Checking if "addons-102056" exists ...
	I0414 12:54:46.093609 2191137 addons.go:69] Setting storage-provisioner=true in profile "addons-102056"
	I0414 12:54:46.093687 2191137 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-102056"
	I0414 12:54:46.093705 2191137 addons.go:238] Setting addon storage-provisioner=true in "addons-102056"
	I0414 12:54:46.093717 2191137 addons.go:69] Setting ingress=true in profile "addons-102056"
	I0414 12:54:46.093732 2191137 addons.go:238] Setting addon ingress=true in "addons-102056"
	I0414 12:54:46.093714 2191137 addons.go:69] Setting volcano=true in profile "addons-102056"
	I0414 12:54:46.093775 2191137 host.go:66] Checking if "addons-102056" exists ...
	I0414 12:54:46.093781 2191137 addons.go:69] Setting gcp-auth=true in profile "addons-102056"
	I0414 12:54:46.093606 2191137 addons.go:69] Setting default-storageclass=true in profile "addons-102056"
	I0414 12:54:46.093852 2191137 mustload.go:65] Loading cluster: addons-102056
	I0414 12:54:46.093855 2191137 addons.go:238] Setting addon volcano=true in "addons-102056"
	I0414 12:54:46.093887 2191137 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-102056"
	I0414 12:54:46.093941 2191137 addons.go:69] Setting volumesnapshots=true in profile "addons-102056"
	I0414 12:54:46.093968 2191137 addons.go:69] Setting ingress-dns=true in profile "addons-102056"
	I0414 12:54:46.093988 2191137 addons.go:238] Setting addon ingress-dns=true in "addons-102056"
	I0414 12:54:46.094011 2191137 host.go:66] Checking if "addons-102056" exists ...
	I0414 12:54:46.094023 2191137 config.go:182] Loaded profile config "addons-102056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:54:46.093776 2191137 host.go:66] Checking if "addons-102056" exists ...
	I0414 12:54:46.093707 2191137 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-102056"
	I0414 12:54:46.093947 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.096908 2191137 host.go:66] Checking if "addons-102056" exists ...
	I0414 12:54:46.093948 2191137 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-102056"
	I0414 12:54:46.097201 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.097216 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.097226 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.097230 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.097234 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.097267 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.097275 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.097318 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.097381 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.097383 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.093950 2191137 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-102056"
	I0414 12:54:46.097377 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.097432 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.096914 2191137 out.go:177] * Verifying Kubernetes components...
	I0414 12:54:46.093960 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.093968 2191137 addons.go:69] Setting registry=true in profile "addons-102056"
	I0414 12:54:46.097810 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.097816 2191137 addons.go:238] Setting addon registry=true in "addons-102056"
	I0414 12:54:46.097828 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.097846 2191137 host.go:66] Checking if "addons-102056" exists ...
	I0414 12:54:46.097948 2191137 host.go:66] Checking if "addons-102056" exists ...
	I0414 12:54:46.098029 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.098055 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.093958 2191137 addons.go:69] Setting metrics-server=true in profile "addons-102056"
	I0414 12:54:46.098258 2191137 addons.go:238] Setting addon metrics-server=true in "addons-102056"
	I0414 12:54:46.098301 2191137 host.go:66] Checking if "addons-102056" exists ...
	I0414 12:54:46.098709 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.098742 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.093960 2191137 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-102056"
	I0414 12:54:46.098991 2191137 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-102056"
	I0414 12:54:46.099036 2191137 host.go:66] Checking if "addons-102056" exists ...
	I0414 12:54:46.099419 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.099456 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.093977 2191137 addons.go:238] Setting addon volumesnapshots=true in "addons-102056"
	I0414 12:54:46.099572 2191137 host.go:66] Checking if "addons-102056" exists ...
	I0414 12:54:46.093965 2191137 addons.go:69] Setting cloud-spanner=true in profile "addons-102056"
	I0414 12:54:46.099674 2191137 addons.go:238] Setting addon cloud-spanner=true in "addons-102056"
	I0414 12:54:46.099695 2191137 host.go:66] Checking if "addons-102056" exists ...
	I0414 12:54:46.099792 2191137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:54:46.093970 2191137 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-102056"
	I0414 12:54:46.099931 2191137 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-102056"
	I0414 12:54:46.099958 2191137 host.go:66] Checking if "addons-102056" exists ...
	I0414 12:54:46.100103 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.100215 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.100361 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.100398 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.118944 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36951
	I0414 12:54:46.119129 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37151
	I0414 12:54:46.119210 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44021
	I0414 12:54:46.119584 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.119700 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.133286 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.136911 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.136943 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.137079 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.137088 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.137212 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.137233 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.137514 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.137541 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.137591 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37593
	I0414 12:54:46.137649 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43111
	I0414 12:54:46.137693 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.137721 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.137787 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46391
	I0414 12:54:46.137825 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40209
	I0414 12:54:46.137947 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43519
	I0414 12:54:46.138048 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.138072 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.138541 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.138745 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.138897 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.139016 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.139120 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.139226 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.139232 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:46.139233 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.139619 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.139676 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.140211 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.140280 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.140855 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.140864 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.140990 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.141207 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.141222 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.141400 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.141749 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.141780 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.141794 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.141803 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.141873 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.141969 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.142148 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.142164 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.142665 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.142986 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:46.143151 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.143188 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.143474 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.143222 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.143543 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.143559 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.146530 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.146591 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.158720 2191137 host.go:66] Checking if "addons-102056" exists ...
	I0414 12:54:46.160042 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.160101 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.162322 2191137 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-102056"
	I0414 12:54:46.162375 2191137 host.go:66] Checking if "addons-102056" exists ...
	I0414 12:54:46.162834 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.162881 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.163109 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39555
	I0414 12:54:46.163375 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.165905 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I0414 12:54:46.169587 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.170201 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.170325 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.170347 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.171130 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.171365 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:46.173548 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37299
	I0414 12:54:46.174138 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.175394 2191137 addons.go:238] Setting addon default-storageclass=true in "addons-102056"
	I0414 12:54:46.175436 2191137 host.go:66] Checking if "addons-102056" exists ...
	I0414 12:54:46.175925 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.175974 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.176390 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.176417 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.189491 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.189729 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I0414 12:54:46.189865 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39095
	I0414 12:54:46.189945 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I0414 12:54:46.190150 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.190165 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.190256 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42451
	I0414 12:54:46.190428 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38559
	I0414 12:54:46.191451 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.191522 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.191571 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.191635 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.192111 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.192146 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.192440 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.192456 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.192593 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.192604 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.192854 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.192936 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.193002 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.193144 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.193156 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.193405 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.193422 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.193542 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.193554 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.193851 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:46.193890 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.193915 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.194126 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.194233 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.194274 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.194603 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.194634 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.194603 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.194741 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.194830 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.194864 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.195192 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.195733 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.195790 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.199046 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45421
	I0414 12:54:46.199587 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.200166 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.200184 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.200635 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.200856 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:46.202932 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:46.204522 2191137 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0414 12:54:46.205891 2191137 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0414 12:54:46.205913 2191137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0414 12:54:46.206031 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:46.210644 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.211354 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:46.211379 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.211622 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:46.212100 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:46.212289 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:46.212465 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:46.214977 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45711
	I0414 12:54:46.215740 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.215827 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36001
	I0414 12:54:46.216218 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.216529 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.216547 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.216708 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.216719 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.217177 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.217650 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.217709 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:46.218342 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.218400 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.218679 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41147
	I0414 12:54:46.218871 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42035
	I0414 12:54:46.219187 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.219264 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.219615 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.219631 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.219992 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.220008 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.220063 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.220216 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:46.220436 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.220621 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:46.220676 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:46.222298 2191137 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 12:54:46.222692 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38463
	I0414 12:54:46.222849 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:46.223195 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:46.223362 2191137 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 12:54:46.223381 2191137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 12:54:46.223406 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:46.223791 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.224410 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.224433 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.224411 2191137 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.32
	I0414 12:54:46.224523 2191137 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0414 12:54:46.224944 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.225813 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.225870 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.226141 2191137 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0414 12:54:46.226153 2191137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0414 12:54:46.226170 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:46.226725 2191137 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0414 12:54:46.226739 2191137 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0414 12:54:46.226756 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:46.226781 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34819
	I0414 12:54:46.227924 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.228486 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.228504 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.229156 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.229729 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:46.229767 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:46.231241 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.231277 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.231875 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:46.231911 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.232102 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:46.232274 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:46.232334 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.232480 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:46.232540 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:46.232557 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.232694 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:46.233156 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:46.233219 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:46.233238 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.233271 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:46.233475 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:46.233538 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:46.233733 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:46.233765 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:46.233923 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:46.234410 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:46.236157 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I0414 12:54:46.236286 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46785
	I0414 12:54:46.236685 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.236860 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.237357 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.237377 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.237823 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.238031 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:46.238318 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.238335 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.238685 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.238880 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:46.239535 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42459
	I0414 12:54:46.239782 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:46.240072 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.240803 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:46.241257 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34769
	I0414 12:54:46.241475 2191137 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0414 12:54:46.241675 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.242222 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.242245 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.242393 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34027
	I0414 12:54:46.242411 2191137 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 12:54:46.242428 2191137 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 12:54:46.242456 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:46.242682 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.242851 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:46.242918 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38487
	I0414 12:54:46.243009 2191137 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0414 12:54:46.243060 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.243523 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.243545 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.243552 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.243855 2191137 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0414 12:54:46.243881 2191137 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0414 12:54:46.243900 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:46.244123 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.244312 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.244329 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.244593 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.244618 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.244982 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.245466 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:46.246311 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.246386 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:46.246603 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:46.247270 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:46.248581 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.248723 2191137 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0414 12:54:46.248983 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:46.249067 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:46.249104 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.249268 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:46.249330 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:46.249341 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:46.249850 2191137 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0414 12:54:46.249872 2191137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0414 12:54:46.249892 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:46.251518 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:46.251524 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.251545 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34005
	I0414 12:54:46.251571 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:46.251592 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:46.251597 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:46.251605 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:46.251615 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:46.251623 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:46.251548 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:46.251677 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.251712 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:46.251795 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:46.251894 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:46.252177 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:46.252171 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:46.252404 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.252720 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:46.252898 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.252915 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.253096 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:46.253286 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.253509 2191137 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0414 12:54:46.253548 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:46.253583 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:46.253637 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:46.253645 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	W0414 12:54:46.253734 2191137 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0414 12:54:46.254313 2191137 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0414 12:54:46.254335 2191137 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0414 12:54:46.254354 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:46.254443 2191137 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0414 12:54:46.254737 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.255164 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:46.255193 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.255438 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:46.255658 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:46.255804 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:46.255952 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:46.256321 2191137 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0414 12:54:46.256721 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:46.257873 2191137 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0414 12:54:46.257897 2191137 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0414 12:54:46.257970 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.258383 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:46.258404 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.258589 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:46.258757 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:46.258881 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:46.259040 2191137 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0414 12:54:46.259054 2191137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0414 12:54:46.259060 2191137 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0414 12:54:46.259070 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:46.259076 2191137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0414 12:54:46.259093 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:46.259165 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:46.261447 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I0414 12:54:46.261542 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44225
	I0414 12:54:46.261859 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.261948 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.262383 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.262395 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.262471 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.262477 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.263001 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.263351 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.263536 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.263690 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:46.263724 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:46.265820 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:46.266047 2191137 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 12:54:46.266060 2191137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 12:54:46.266064 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:46.266075 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:46.267479 2191137 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0414 12:54:46.268653 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:46.268678 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.268750 2191137 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0414 12:54:46.269060 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.269433 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:46.269504 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:46.269525 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.269608 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:46.269659 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:46.269880 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:46.269858 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.269858 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:46.270054 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:46.270075 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:46.270266 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:46.270476 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:46.270497 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.270533 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:46.270756 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:46.271007 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:46.271189 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:46.271365 2191137 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0414 12:54:46.272324 2191137 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0414 12:54:46.272714 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34939
	I0414 12:54:46.273203 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.273657 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.273682 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.274040 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.274264 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:46.274270 2191137 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0414 12:54:46.275466 2191137 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0414 12:54:46.275583 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
	I0414 12:54:46.276005 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:46.276148 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:46.276473 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:46.276489 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:46.276877 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:46.277094 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:46.277246 2191137 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0414 12:54:46.277280 2191137 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0414 12:54:46.278436 2191137 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0414 12:54:46.278458 2191137 out.go:177]   - Using image docker.io/busybox:stable
	I0414 12:54:46.278945 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:46.279716 2191137 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0414 12:54:46.279733 2191137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0414 12:54:46.279747 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:46.279792 2191137 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0414 12:54:46.279800 2191137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0414 12:54:46.279809 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:46.280170 2191137 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0414 12:54:46.281626 2191137 out.go:177]   - Using image docker.io/registry:2.8.3
	I0414 12:54:46.282723 2191137 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0414 12:54:46.282743 2191137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0414 12:54:46.282760 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:46.283364 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.283393 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:46.283410 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.283577 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:46.283792 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:46.283948 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:46.284012 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.284077 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:46.284427 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:46.284443 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.284630 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:46.284814 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:46.284944 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:46.285053 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:46.286194 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.286550 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:46.286560 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:46.286718 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:46.286889 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:46.286991 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:46.287092 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:46.505857 2191137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 12:54:46.505932 2191137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 12:54:46.707074 2191137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0414 12:54:46.714493 2191137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 12:54:46.718766 2191137 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 12:54:46.718784 2191137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0414 12:54:46.744310 2191137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 12:54:46.791146 2191137 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0414 12:54:46.791182 2191137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0414 12:54:46.809569 2191137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0414 12:54:46.815562 2191137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0414 12:54:46.835545 2191137 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0414 12:54:46.835584 2191137 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0414 12:54:46.851743 2191137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0414 12:54:46.864492 2191137 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0414 12:54:46.864517 2191137 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0414 12:54:46.874436 2191137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0414 12:54:46.898733 2191137 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0414 12:54:46.898765 2191137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0414 12:54:46.904287 2191137 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0414 12:54:46.904308 2191137 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0414 12:54:46.908798 2191137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0414 12:54:46.962688 2191137 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 12:54:46.962721 2191137 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 12:54:47.031883 2191137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0414 12:54:47.082072 2191137 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0414 12:54:47.082104 2191137 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0414 12:54:47.130822 2191137 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0414 12:54:47.130853 2191137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0414 12:54:47.182529 2191137 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0414 12:54:47.182557 2191137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0414 12:54:47.198880 2191137 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0414 12:54:47.198906 2191137 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0414 12:54:47.241122 2191137 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0414 12:54:47.241156 2191137 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0414 12:54:47.259553 2191137 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 12:54:47.259588 2191137 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 12:54:47.357329 2191137 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0414 12:54:47.357363 2191137 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0414 12:54:47.378347 2191137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0414 12:54:47.424959 2191137 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0414 12:54:47.424989 2191137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0414 12:54:47.448378 2191137 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0414 12:54:47.448416 2191137 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0414 12:54:47.461622 2191137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 12:54:47.514778 2191137 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0414 12:54:47.514807 2191137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0414 12:54:47.663304 2191137 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 12:54:47.663332 2191137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0414 12:54:47.703516 2191137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0414 12:54:47.704997 2191137 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0414 12:54:47.705030 2191137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0414 12:54:47.832664 2191137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 12:54:48.020244 2191137 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0414 12:54:48.020291 2191137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0414 12:54:48.502355 2191137 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.9963809s)
	I0414 12:54:48.502439 2191137 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0414 12:54:48.502479 2191137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.795366182s)
	I0414 12:54:48.502394 2191137 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.996494732s)
	I0414 12:54:48.502540 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:48.502718 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:48.503081 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:48.503106 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:48.503117 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:48.503125 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:48.503537 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:48.503573 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:48.503589 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:48.504626 2191137 node_ready.go:35] waiting up to 6m0s for node "addons-102056" to be "Ready" ...
	I0414 12:54:48.523827 2191137 node_ready.go:49] node "addons-102056" has status "Ready":"True"
	I0414 12:54:48.523857 2191137 node_ready.go:38] duration metric: took 19.20015ms for node "addons-102056" to be "Ready" ...
	I0414 12:54:48.523869 2191137 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 12:54:48.535165 2191137 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-w59gx" in "kube-system" namespace to be "Ready" ...
	I0414 12:54:48.617520 2191137 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0414 12:54:48.617573 2191137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0414 12:54:48.819183 2191137 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0414 12:54:48.819222 2191137 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0414 12:54:49.021275 2191137 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-102056" context rescaled to 1 replicas
	I0414 12:54:49.083120 2191137 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0414 12:54:49.083149 2191137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0414 12:54:49.492264 2191137 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0414 12:54:49.492306 2191137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0414 12:54:49.935083 2191137 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0414 12:54:49.935119 2191137 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0414 12:54:50.170056 2191137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0414 12:54:50.542468 2191137 pod_ready.go:103] pod "coredns-668d6bf9bc-w59gx" in "kube-system" namespace has status "Ready":"False"
	I0414 12:54:51.430918 2191137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.716374109s)
	I0414 12:54:51.430980 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:51.430978 2191137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.621375123s)
	I0414 12:54:51.431036 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:51.431053 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:51.431077 2191137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.615489018s)
	I0414 12:54:51.431107 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:51.430994 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:51.430918 2191137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.686577218s)
	I0414 12:54:51.431117 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:51.431218 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:51.431235 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:51.431622 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:51.431633 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:51.431648 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:51.431655 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:51.431634 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:51.431663 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:51.431671 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:51.431657 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:51.431680 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:51.431681 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:51.431688 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:51.431704 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:51.431718 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:51.431726 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:51.431734 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:51.431784 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:51.431798 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:51.431807 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:51.431815 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:51.431940 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:51.431958 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:51.431997 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:51.432003 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:51.432045 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:51.432084 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:51.432091 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:51.432128 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:51.432139 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:51.432974 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:51.433029 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:51.468639 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:51.468660 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:51.468969 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:51.468970 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:51.469082 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	W0414 12:54:51.469215 2191137 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0414 12:54:51.473384 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:51.473400 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:51.473799 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:51.473815 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:52.597368 2191137 pod_ready.go:103] pod "coredns-668d6bf9bc-w59gx" in "kube-system" namespace has status "Ready":"False"
	I0414 12:54:53.051572 2191137 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0414 12:54:53.051623 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:53.055738 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:53.056436 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:53.056473 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:53.056648 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:53.056839 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:53.057024 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:53.057121 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:53.402801 2191137 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0414 12:54:53.510648 2191137 addons.go:238] Setting addon gcp-auth=true in "addons-102056"
	I0414 12:54:53.510727 2191137 host.go:66] Checking if "addons-102056" exists ...
	I0414 12:54:53.511230 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:53.511282 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:53.527867 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41053
	I0414 12:54:53.528403 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:53.528969 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:53.529001 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:53.529398 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:53.530045 2191137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:54:53.530088 2191137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:54:53.546858 2191137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37737
	I0414 12:54:53.547447 2191137 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:54:53.547926 2191137 main.go:141] libmachine: Using API Version  1
	I0414 12:54:53.547950 2191137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:54:53.548416 2191137 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:54:53.548656 2191137 main.go:141] libmachine: (addons-102056) Calling .GetState
	I0414 12:54:53.550688 2191137 main.go:141] libmachine: (addons-102056) Calling .DriverName
	I0414 12:54:53.550930 2191137 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0414 12:54:53.550952 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHHostname
	I0414 12:54:53.554802 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:53.555309 2191137 main.go:141] libmachine: (addons-102056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:18:7d", ip: ""} in network mk-addons-102056: {Iface:virbr1 ExpiryTime:2025-04-14 13:54:15 +0000 UTC Type:0 Mac:52:54:00:6a:18:7d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:addons-102056 Clientid:01:52:54:00:6a:18:7d}
	I0414 12:54:53.555344 2191137 main.go:141] libmachine: (addons-102056) DBG | domain addons-102056 has defined IP address 192.168.39.15 and MAC address 52:54:00:6a:18:7d in network mk-addons-102056
	I0414 12:54:53.555476 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHPort
	I0414 12:54:53.555664 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHKeyPath
	I0414 12:54:53.555828 2191137 main.go:141] libmachine: (addons-102056) Calling .GetSSHUsername
	I0414 12:54:53.555989 2191137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/addons-102056/id_rsa Username:docker}
	I0414 12:54:53.637700 2191137 pod_ready.go:93] pod "coredns-668d6bf9bc-w59gx" in "kube-system" namespace has status "Ready":"True"
	I0414 12:54:53.637728 2191137 pod_ready.go:82] duration metric: took 5.102531786s for pod "coredns-668d6bf9bc-w59gx" in "kube-system" namespace to be "Ready" ...
	I0414 12:54:53.637741 2191137 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-zrwsv" in "kube-system" namespace to be "Ready" ...
	I0414 12:54:54.190833 2191137 pod_ready.go:93] pod "coredns-668d6bf9bc-zrwsv" in "kube-system" namespace has status "Ready":"True"
	I0414 12:54:54.190859 2191137 pod_ready.go:82] duration metric: took 553.111053ms for pod "coredns-668d6bf9bc-zrwsv" in "kube-system" namespace to be "Ready" ...
	I0414 12:54:54.190869 2191137 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-102056" in "kube-system" namespace to be "Ready" ...
	I0414 12:54:54.230294 2191137 pod_ready.go:93] pod "etcd-addons-102056" in "kube-system" namespace has status "Ready":"True"
	I0414 12:54:54.230323 2191137 pod_ready.go:82] duration metric: took 39.447348ms for pod "etcd-addons-102056" in "kube-system" namespace to be "Ready" ...
	I0414 12:54:54.230338 2191137 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-102056" in "kube-system" namespace to be "Ready" ...
	I0414 12:54:54.245966 2191137 pod_ready.go:93] pod "kube-apiserver-addons-102056" in "kube-system" namespace has status "Ready":"True"
	I0414 12:54:54.245996 2191137 pod_ready.go:82] duration metric: took 15.648839ms for pod "kube-apiserver-addons-102056" in "kube-system" namespace to be "Ready" ...
	I0414 12:54:54.246009 2191137 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-102056" in "kube-system" namespace to be "Ready" ...
	I0414 12:54:54.256348 2191137 pod_ready.go:93] pod "kube-controller-manager-addons-102056" in "kube-system" namespace has status "Ready":"True"
	I0414 12:54:54.256372 2191137 pod_ready.go:82] duration metric: took 10.354862ms for pod "kube-controller-manager-addons-102056" in "kube-system" namespace to be "Ready" ...
	I0414 12:54:54.256380 2191137 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f7vbt" in "kube-system" namespace to be "Ready" ...
	I0414 12:54:54.354784 2191137 pod_ready.go:93] pod "kube-proxy-f7vbt" in "kube-system" namespace has status "Ready":"True"
	I0414 12:54:54.354812 2191137 pod_ready.go:82] duration metric: took 98.424817ms for pod "kube-proxy-f7vbt" in "kube-system" namespace to be "Ready" ...
	I0414 12:54:54.354822 2191137 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-102056" in "kube-system" namespace to be "Ready" ...
	I0414 12:54:54.674456 2191137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.799980938s)
	I0414 12:54:54.674531 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:54.674531 2191137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.765698168s)
	I0414 12:54:54.674555 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:54.674576 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:54.674591 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:54.674641 2191137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.642716096s)
	I0414 12:54:54.674664 2191137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.296286054s)
	I0414 12:54:54.674677 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:54.674681 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:54.674693 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:54.674745 2191137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.213092693s)
	I0414 12:54:54.674773 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:54.674785 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:54.674694 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:54.675107 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:54.675120 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:54.675129 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:54.675136 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:54.675158 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:54.675160 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:54.675179 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:54.675190 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:54.675195 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:54.675202 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:54.675181 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:54.675213 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:54.675221 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:54.675220 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:54.675226 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:54.675236 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:54.675245 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:54.675249 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:54.675259 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:54.675261 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:54.675269 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:54.675281 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:54.675228 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:54.675625 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:54.675634 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:54.675683 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:54.675711 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:54.675717 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:54.677211 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:54.677241 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:54.677247 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:54.677952 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:54.677975 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:54.678006 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:54.678013 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:54.678024 2191137 addons.go:479] Verifying addon metrics-server=true in "addons-102056"
	I0414 12:54:54.678106 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:54.678116 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:54.678134 2191137 addons.go:479] Verifying addon registry=true in "addons-102056"
	I0414 12:54:54.678516 2191137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.974942654s)
	I0414 12:54:54.678579 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:54.678613 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:54.678676 2191137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.845965205s)
	W0414 12:54:54.678737 2191137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0414 12:54:54.678828 2191137 retry.go:31] will retry after 230.735244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0414 12:54:54.678953 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:54.678963 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:54.679051 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:54.679061 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:54.679068 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:54.679821 2191137 out.go:177] * Verifying registry addon...
	I0414 12:54:54.679932 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:54.679941 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:54.679956 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:54.680047 2191137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.828263884s)
	I0414 12:54:54.680081 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:54.680094 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:54.680330 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:54.680356 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:54.680369 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:54.680377 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:54.680383 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:54.680711 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:54.680716 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:54.680749 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:54.680759 2191137 addons.go:479] Verifying addon ingress=true in "addons-102056"
	I0414 12:54:54.681302 2191137 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-102056 service yakd-dashboard -n yakd-dashboard
	
	I0414 12:54:54.682230 2191137 out.go:177] * Verifying ingress addon...
	I0414 12:54:54.682231 2191137 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0414 12:54:54.684155 2191137 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0414 12:54:54.685376 2191137 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0414 12:54:54.685393 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:54:54.689426 2191137 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0414 12:54:54.689442 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:54:54.741907 2191137 pod_ready.go:93] pod "kube-scheduler-addons-102056" in "kube-system" namespace has status "Ready":"True"
	I0414 12:54:54.741933 2191137 pod_ready.go:82] duration metric: took 387.104673ms for pod "kube-scheduler-addons-102056" in "kube-system" namespace to be "Ready" ...
	I0414 12:54:54.741943 2191137 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-gxnwr" in "kube-system" namespace to be "Ready" ...
	I0414 12:54:54.910131 2191137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 12:54:55.188395 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:54:55.192258 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:54:55.696614 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:54:55.696816 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:54:56.218449 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:54:56.219216 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:54:56.289485 2191137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.119379161s)
	I0414 12:54:56.289541 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:56.289558 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:56.289602 2191137 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.738644927s)
	I0414 12:54:56.289741 2191137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.379555702s)
	I0414 12:54:56.289794 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:56.289816 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:56.289859 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:56.289879 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:56.289886 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:56.289888 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:56.289904 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:56.290071 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:56.290090 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:56.290100 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:56.290115 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:56.290164 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:56.290181 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:56.290191 2191137 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-102056"
	I0414 12:54:56.290205 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:56.291599 2191137 out.go:177] * Verifying csi-hostpath-driver addon...
	I0414 12:54:56.291599 2191137 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0414 12:54:56.291993 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:56.292033 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:56.292043 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:56.293444 2191137 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0414 12:54:56.294157 2191137 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0414 12:54:56.295213 2191137 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0414 12:54:56.295229 2191137 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0414 12:54:56.302879 2191137 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0414 12:54:56.302905 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:54:56.322669 2191137 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0414 12:54:56.322695 2191137 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0414 12:54:56.354734 2191137 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0414 12:54:56.354759 2191137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0414 12:54:56.373481 2191137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0414 12:54:56.685718 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:54:56.687601 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:54:56.747143 2191137 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-gxnwr" in "kube-system" namespace has status "Ready":"False"
	I0414 12:54:56.797695 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:54:57.196872 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:57.196903 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:57.197221 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:57.197242 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:57.197253 2191137 main.go:141] libmachine: Making call to close driver server
	I0414 12:54:57.197281 2191137 main.go:141] libmachine: (addons-102056) DBG | Closing plugin on server side
	I0414 12:54:57.197369 2191137 main.go:141] libmachine: (addons-102056) Calling .Close
	I0414 12:54:57.197661 2191137 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:54:57.197675 2191137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:54:57.198788 2191137 addons.go:479] Verifying addon gcp-auth=true in "addons-102056"
	I0414 12:54:57.200993 2191137 out.go:177] * Verifying gcp-auth addon...
	I0414 12:54:57.203020 2191137 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0414 12:54:57.206651 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:54:57.206849 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:54:57.237482 2191137 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0414 12:54:57.237504 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:54:57.299331 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:54:57.687866 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:54:57.688167 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:54:57.705476 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:54:57.797583 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:54:58.185936 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:54:58.187814 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:54:58.206232 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:54:58.296943 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:54:58.686104 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:54:58.688220 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:54:58.706392 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:54:58.796857 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:54:59.186406 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:54:59.187673 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:54:59.205954 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:54:59.248334 2191137 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-gxnwr" in "kube-system" namespace has status "Ready":"False"
	I0414 12:54:59.297659 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:54:59.687522 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:54:59.687651 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:54:59.705857 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:54:59.797139 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:00.187095 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:00.187789 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:00.206114 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:00.297060 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:00.687060 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:00.687718 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:00.706516 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:00.797826 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:01.187377 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:01.187841 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:01.206781 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:01.249372 2191137 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-gxnwr" in "kube-system" namespace has status "Ready":"False"
	I0414 12:55:01.297536 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:01.686619 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:01.688073 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:01.707328 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:01.830237 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:02.186516 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:02.188201 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:02.206682 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:02.297408 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:02.687412 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:02.688491 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:02.706157 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:02.797446 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:03.186438 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:03.187795 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:03.206291 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:03.297375 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:03.686910 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:03.688687 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:03.706508 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:03.746954 2191137 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-gxnwr" in "kube-system" namespace has status "Ready":"False"
	I0414 12:55:03.807337 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:04.188141 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:04.188259 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:04.207799 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:04.301313 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:04.686912 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:04.687714 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:04.705907 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:04.797151 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:05.187537 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:05.187648 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:05.206324 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:05.297518 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:05.685674 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:05.689001 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:05.706608 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:05.747236 2191137 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-gxnwr" in "kube-system" namespace has status "Ready":"False"
	I0414 12:55:05.797139 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:06.186840 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:06.186910 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:06.208334 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:06.297395 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:06.689490 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:06.691620 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:06.708135 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:06.797264 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:07.186369 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:07.187694 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:07.206077 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:07.297245 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:07.687484 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:07.688195 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:07.706171 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:07.749083 2191137 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-gxnwr" in "kube-system" namespace has status "Ready":"False"
	I0414 12:55:07.797316 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:08.191434 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:08.193137 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:08.207985 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:08.297797 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:08.686398 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:08.688198 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:08.706077 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:08.798051 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:09.186458 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:09.187280 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:09.205676 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:09.297585 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:09.687218 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:09.687605 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:09.706169 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:09.797152 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:10.187598 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:10.194362 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:10.206072 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:10.250262 2191137 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-gxnwr" in "kube-system" namespace has status "Ready":"False"
	I0414 12:55:10.306304 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:10.689405 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:10.689520 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:10.707029 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:10.796818 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:11.186747 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:11.187821 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:11.206106 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:11.297549 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:11.686695 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:11.687539 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:11.705655 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:11.797258 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:12.221888 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:12.222151 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:12.224334 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:12.506703 2191137 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-gxnwr" in "kube-system" namespace has status "Ready":"False"
	I0414 12:55:12.507000 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:12.685943 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:12.687621 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:12.705613 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:12.797155 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:13.187489 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:13.187650 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:13.206136 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:13.296907 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:13.687299 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:13.687447 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:13.705781 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:13.797067 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:14.186337 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:14.188137 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:14.205934 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:14.297426 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:14.756437 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:14.756517 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:14.756551 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:14.758755 2191137 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-gxnwr" in "kube-system" namespace has status "Ready":"False"
	I0414 12:55:14.796753 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:15.193911 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:15.202213 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:15.205960 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:15.297567 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:15.685554 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:15.687597 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:15.706433 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:15.797288 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:16.185957 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:16.188283 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:16.206548 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:16.297340 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:16.686734 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:16.687980 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:16.707104 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:17.117354 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:17.188056 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:17.188292 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:17.206277 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:17.247606 2191137 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-gxnwr" in "kube-system" namespace has status "Ready":"False"
	I0414 12:55:17.297059 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:17.687673 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:17.687730 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:17.706501 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:17.796983 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:18.186284 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:18.187816 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:18.206326 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:18.297507 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:18.685810 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:18.687614 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:18.705973 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:18.796971 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:19.186391 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:19.187832 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:19.205887 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:19.297971 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:19.686330 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:19.687695 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:19.705841 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:19.747556 2191137 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-gxnwr" in "kube-system" namespace has status "Ready":"False"
	I0414 12:55:19.796717 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:20.187001 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:20.188658 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:20.205538 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:20.247474 2191137 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-gxnwr" in "kube-system" namespace has status "Ready":"True"
	I0414 12:55:20.247497 2191137 pod_ready.go:82] duration metric: took 25.505547076s for pod "nvidia-device-plugin-daemonset-gxnwr" in "kube-system" namespace to be "Ready" ...
	I0414 12:55:20.247505 2191137 pod_ready.go:39] duration metric: took 31.723620436s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 12:55:20.247525 2191137 api_server.go:52] waiting for apiserver process to appear ...
	I0414 12:55:20.247587 2191137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:55:20.266387 2191137 api_server.go:72] duration metric: took 34.173237921s to wait for apiserver process to appear ...
	I0414 12:55:20.266414 2191137 api_server.go:88] waiting for apiserver healthz status ...
	I0414 12:55:20.266432 2191137 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I0414 12:55:20.271010 2191137 api_server.go:279] https://192.168.39.15:8443/healthz returned 200:
	ok
	I0414 12:55:20.271875 2191137 api_server.go:141] control plane version: v1.32.2
	I0414 12:55:20.271907 2191137 api_server.go:131] duration metric: took 5.478417ms to wait for apiserver health ...
	I0414 12:55:20.271916 2191137 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 12:55:20.290537 2191137 system_pods.go:59] 18 kube-system pods found
	I0414 12:55:20.290586 2191137 system_pods.go:61] "amd-gpu-device-plugin-bw2j9" [11127e4c-4549-4659-806b-f962e161c496] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0414 12:55:20.290597 2191137 system_pods.go:61] "coredns-668d6bf9bc-zrwsv" [a09846c0-bce2-4a0f-bfdd-852a49227a49] Running
	I0414 12:55:20.290607 2191137 system_pods.go:61] "csi-hostpath-attacher-0" [a7e46af2-ae66-4358-b776-728fcdb77c91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0414 12:55:20.290615 2191137 system_pods.go:61] "csi-hostpath-resizer-0" [d285642f-bb79-4e03-bdaf-62a3e8c464ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0414 12:55:20.290624 2191137 system_pods.go:61] "csi-hostpathplugin-56zdl" [bd8522d0-ea69-40e4-b758-fc2a38b768b6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0414 12:55:20.290633 2191137 system_pods.go:61] "etcd-addons-102056" [197fd742-699e-486c-9f99-489279f81bf1] Running
	I0414 12:55:20.290639 2191137 system_pods.go:61] "kube-apiserver-addons-102056" [f85adb8c-9997-4da9-9704-dc5e33b07b8a] Running
	I0414 12:55:20.290647 2191137 system_pods.go:61] "kube-controller-manager-addons-102056" [fef3419f-c932-41fb-82eb-e18f5f489cde] Running
	I0414 12:55:20.290650 2191137 system_pods.go:61] "kube-ingress-dns-minikube" [e31ba3e0-b868-4363-912d-83962ddcc09e] Running
	I0414 12:55:20.290654 2191137 system_pods.go:61] "kube-proxy-f7vbt" [a7777a51-e192-433d-99d9-adde9cef3add] Running
	I0414 12:55:20.290660 2191137 system_pods.go:61] "kube-scheduler-addons-102056" [2c3cff80-4df1-4bd8-a57a-50e479574a9b] Running
	I0414 12:55:20.290672 2191137 system_pods.go:61] "metrics-server-7fbb699795-xdmb7" [26c6d03a-0446-4c50-8e92-c38f33472918] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 12:55:20.290680 2191137 system_pods.go:61] "nvidia-device-plugin-daemonset-gxnwr" [f8c27fc5-108d-4c72-b7f8-84e3bba4a3f6] Running
	I0414 12:55:20.290691 2191137 system_pods.go:61] "registry-6c88467877-j2pj9" [465b5148-6e62-4e44-a183-a71768164039] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0414 12:55:20.290701 2191137 system_pods.go:61] "registry-proxy-sjhg2" [972e37a5-f490-4f90-aef0-cdf8b49da676] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0414 12:55:20.290714 2191137 system_pods.go:61] "snapshot-controller-68b874b76f-dkfhw" [6587aa05-4548-4e44-b383-34f03a7100ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 12:55:20.290728 2191137 system_pods.go:61] "snapshot-controller-68b874b76f-zxds2" [e6d766c4-de0e-4533-8c82-626dba416245] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 12:55:20.290735 2191137 system_pods.go:61] "storage-provisioner" [8c993ba5-23fd-439d-98f3-acbde5363ff4] Running
	I0414 12:55:20.290743 2191137 system_pods.go:74] duration metric: took 18.820031ms to wait for pod list to return data ...
	I0414 12:55:20.290757 2191137 default_sa.go:34] waiting for default service account to be created ...
	I0414 12:55:20.293177 2191137 default_sa.go:45] found service account: "default"
	I0414 12:55:20.293197 2191137 default_sa.go:55] duration metric: took 2.43012ms for default service account to be created ...
	I0414 12:55:20.293204 2191137 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 12:55:20.295994 2191137 system_pods.go:86] 18 kube-system pods found
	I0414 12:55:20.296020 2191137 system_pods.go:89] "amd-gpu-device-plugin-bw2j9" [11127e4c-4549-4659-806b-f962e161c496] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0414 12:55:20.296026 2191137 system_pods.go:89] "coredns-668d6bf9bc-zrwsv" [a09846c0-bce2-4a0f-bfdd-852a49227a49] Running
	I0414 12:55:20.296032 2191137 system_pods.go:89] "csi-hostpath-attacher-0" [a7e46af2-ae66-4358-b776-728fcdb77c91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0414 12:55:20.296038 2191137 system_pods.go:89] "csi-hostpath-resizer-0" [d285642f-bb79-4e03-bdaf-62a3e8c464ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0414 12:55:20.296044 2191137 system_pods.go:89] "csi-hostpathplugin-56zdl" [bd8522d0-ea69-40e4-b758-fc2a38b768b6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0414 12:55:20.296048 2191137 system_pods.go:89] "etcd-addons-102056" [197fd742-699e-486c-9f99-489279f81bf1] Running
	I0414 12:55:20.296054 2191137 system_pods.go:89] "kube-apiserver-addons-102056" [f85adb8c-9997-4da9-9704-dc5e33b07b8a] Running
	I0414 12:55:20.296057 2191137 system_pods.go:89] "kube-controller-manager-addons-102056" [fef3419f-c932-41fb-82eb-e18f5f489cde] Running
	I0414 12:55:20.296061 2191137 system_pods.go:89] "kube-ingress-dns-minikube" [e31ba3e0-b868-4363-912d-83962ddcc09e] Running
	I0414 12:55:20.296065 2191137 system_pods.go:89] "kube-proxy-f7vbt" [a7777a51-e192-433d-99d9-adde9cef3add] Running
	I0414 12:55:20.296069 2191137 system_pods.go:89] "kube-scheduler-addons-102056" [2c3cff80-4df1-4bd8-a57a-50e479574a9b] Running
	I0414 12:55:20.296074 2191137 system_pods.go:89] "metrics-server-7fbb699795-xdmb7" [26c6d03a-0446-4c50-8e92-c38f33472918] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 12:55:20.296077 2191137 system_pods.go:89] "nvidia-device-plugin-daemonset-gxnwr" [f8c27fc5-108d-4c72-b7f8-84e3bba4a3f6] Running
	I0414 12:55:20.296083 2191137 system_pods.go:89] "registry-6c88467877-j2pj9" [465b5148-6e62-4e44-a183-a71768164039] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0414 12:55:20.296089 2191137 system_pods.go:89] "registry-proxy-sjhg2" [972e37a5-f490-4f90-aef0-cdf8b49da676] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0414 12:55:20.296095 2191137 system_pods.go:89] "snapshot-controller-68b874b76f-dkfhw" [6587aa05-4548-4e44-b383-34f03a7100ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 12:55:20.296103 2191137 system_pods.go:89] "snapshot-controller-68b874b76f-zxds2" [e6d766c4-de0e-4533-8c82-626dba416245] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 12:55:20.296107 2191137 system_pods.go:89] "storage-provisioner" [8c993ba5-23fd-439d-98f3-acbde5363ff4] Running
	I0414 12:55:20.296114 2191137 system_pods.go:126] duration metric: took 2.904716ms to wait for k8s-apps to be running ...
	I0414 12:55:20.296123 2191137 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 12:55:20.296173 2191137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 12:55:20.298076 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:20.310524 2191137 system_svc.go:56] duration metric: took 14.391961ms WaitForService to wait for kubelet
	I0414 12:55:20.310550 2191137 kubeadm.go:582] duration metric: took 34.21740543s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 12:55:20.310574 2191137 node_conditions.go:102] verifying NodePressure condition ...
	I0414 12:55:20.313225 2191137 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 12:55:20.313268 2191137 node_conditions.go:123] node cpu capacity is 2
	I0414 12:55:20.313289 2191137 node_conditions.go:105] duration metric: took 2.707471ms to run NodePressure ...
	I0414 12:55:20.313305 2191137 start.go:241] waiting for startup goroutines ...
	I0414 12:55:20.686834 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:20.687843 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:20.706387 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:20.797973 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:21.185806 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:21.187600 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:21.206448 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:21.301876 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:21.686637 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:21.687942 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:21.706472 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:21.796555 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:22.185970 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:22.187792 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:22.206209 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:22.297718 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:22.686955 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:22.687861 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:22.787467 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:22.797797 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:23.187148 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:23.187787 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:23.205824 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:23.297360 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:23.686501 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:23.687839 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:23.706035 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:23.798071 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:24.186252 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:24.188173 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:24.205339 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:24.297879 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:24.686052 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:24.688108 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:24.706923 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:24.797289 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:25.187091 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:25.187804 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:25.206127 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:25.297405 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:25.686317 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:25.687652 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:25.705719 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:25.797998 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:26.185989 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:26.188505 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:26.206413 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:26.297763 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:26.686232 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:26.687919 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:26.706068 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:26.797462 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:27.186663 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:27.188243 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:27.205686 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:27.297470 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:27.687631 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:27.688125 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:27.707304 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:27.798996 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:28.186298 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:28.188360 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:28.206441 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:28.298401 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:28.686372 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:28.688047 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:28.706611 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:28.796978 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:29.473371 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:29.473633 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:29.474237 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:29.474412 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:29.687484 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:29.688141 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:29.706995 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:29.797221 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:30.186367 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:30.187121 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:30.206439 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:30.299730 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:30.685603 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:30.687546 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:30.705642 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:30.796678 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:31.186495 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:31.187165 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:31.205853 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:31.296989 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:31.687631 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:31.687973 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:31.706356 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:31.798239 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:32.189458 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:32.189686 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:32.206461 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:32.297573 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:32.686353 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:32.688371 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:32.706029 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:32.797641 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:33.186608 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:33.189613 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:33.207337 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:33.298085 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:33.687418 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:33.687429 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:33.706227 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:33.798244 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:34.187662 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:34.187990 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:34.207243 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:34.297829 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:34.685983 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:34.687998 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:34.707023 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:34.797101 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:35.187617 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:35.189665 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:35.206755 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:35.297004 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:35.688673 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:35.689978 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:35.787992 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:35.797033 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:36.191460 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:36.193886 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:36.288657 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:36.296577 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:36.686063 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:36.688154 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:36.707116 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:36.797902 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:37.185870 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:37.187752 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:37.206716 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:37.297154 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:37.688131 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:37.688402 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:37.705768 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:37.797006 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:38.187052 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:38.187850 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:38.205996 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:38.297433 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:38.687461 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:38.687783 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:38.707136 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:38.797261 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:39.187608 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:39.188188 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:39.206207 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:39.297267 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:39.688069 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:39.688113 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:39.706534 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:39.797765 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:40.189712 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:40.189869 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:40.493456 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:40.493656 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:40.686922 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:40.687706 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:40.706395 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:40.798063 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:41.185607 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:41.187579 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:41.205878 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:41.297148 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:41.687229 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:41.687245 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:41.705907 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:41.797118 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:42.186711 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:42.188979 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:42.205531 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:42.296915 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:42.687269 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:42.687332 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:42.706071 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:42.797692 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:43.188428 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:43.188596 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:43.208474 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:43.297405 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:43.686911 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:43.687927 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:43.706263 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:43.797768 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:44.185887 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:44.187948 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:44.206263 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:44.297840 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:44.685740 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:44.687383 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:44.705576 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:44.797051 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:45.187921 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:45.187958 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:45.206685 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:45.296894 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:45.686313 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:45.688209 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:45.705554 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:45.797468 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:46.186544 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:46.188826 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:46.205184 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:46.297572 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:46.685794 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:46.687152 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:46.708130 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:46.809488 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:47.185872 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:55:47.188331 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:47.206430 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:47.298482 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:47.687570 2191137 kapi.go:107] duration metric: took 53.005333202s to wait for kubernetes.io/minikube-addons=registry ...
	I0414 12:55:47.688581 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:47.705577 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:47.796530 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:48.187432 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:48.205813 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:48.297770 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:48.688220 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:48.705878 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:48.797539 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:49.191624 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:49.205953 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:49.299027 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:49.689842 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:49.705986 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:49.796929 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:50.188450 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:50.205652 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:50.300998 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:50.687854 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:50.706480 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:50.797598 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:51.187239 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:51.205752 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:51.302866 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:51.700898 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:51.708984 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:51.796995 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:52.187018 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:52.206810 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:52.296951 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:52.687418 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:52.705924 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:52.797687 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:53.191413 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:53.208905 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:53.297856 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:54.028825 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:54.029367 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:54.030014 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:54.188950 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:54.206678 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:54.297434 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:54.687501 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:54.706299 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:54.798094 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:55.188132 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:55.206561 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:55.296667 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:55.688532 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:55.706338 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:55.797925 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:56.188700 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:56.206840 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:56.298460 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:56.688136 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:56.706637 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:56.796870 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:57.188382 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:57.205581 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:57.296613 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:57.687803 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:57.706585 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:57.797563 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:58.189255 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:58.206448 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:58.297366 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:58.687922 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:58.707446 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:58.797677 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:59.188106 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:59.206917 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:59.297325 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:55:59.688184 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:55:59.706782 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:55:59.797262 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:00.374942 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:00.375222 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:00.375228 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:00.687962 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:00.707583 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:00.797401 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:01.187396 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:01.206748 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:01.297528 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:01.700291 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:01.706848 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:01.797184 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:02.189585 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:02.207287 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:02.297598 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:02.698349 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:02.708445 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:02.823025 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:03.529941 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:03.529981 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:03.530182 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:03.687291 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:03.791316 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:03.798338 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:04.188340 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:04.207415 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:04.297672 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:04.688185 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:04.705802 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:04.797097 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:05.187451 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:05.206375 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:05.297686 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:05.688212 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:05.706053 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:05.798209 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:06.192335 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:06.289335 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:06.298120 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:06.699410 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:06.790511 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:06.797120 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:07.188206 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:07.207137 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:07.298080 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:07.690745 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:07.706330 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:07.799423 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:08.188284 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:08.288505 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:08.297933 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:08.688197 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:08.706559 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:08.803880 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:09.189104 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:09.206573 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:09.296692 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:09.687181 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:09.706329 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:09.797298 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:10.187809 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:10.206310 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:10.297844 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:10.719716 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:10.728601 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:10.838400 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:11.192616 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:11.205984 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:11.302348 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:11.688053 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:11.707462 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:11.796313 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:12.188213 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:12.206357 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:12.298844 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:12.688534 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:12.706273 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:12.803557 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:13.187506 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:13.206233 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:13.298083 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:13.687865 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:13.706070 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:13.807194 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:14.187737 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:14.205985 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:14.297422 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:14.693694 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:14.706145 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:14.798575 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:15.187923 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:15.206150 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:15.302612 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:15.687668 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:15.706040 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:15.798360 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:16.187949 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:16.206324 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:16.681906 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:16.689136 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:16.707038 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:16.797518 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:17.188133 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:17.206822 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:17.296844 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:17.688488 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:17.706222 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:17.797253 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:18.188096 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:18.207427 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:18.298632 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:56:18.687378 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:18.715084 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:18.798191 2191137 kapi.go:107] duration metric: took 1m22.504741976s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0414 12:56:19.188206 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:19.205428 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:19.688189 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:19.706000 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:20.188217 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:20.206719 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:20.689042 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:20.706941 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:21.188246 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:21.205212 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:21.688291 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:21.705903 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:22.187511 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:22.206021 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:22.697148 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:22.706597 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:23.188322 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:23.205528 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:23.688379 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:23.705710 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:24.188593 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:24.206087 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:24.687733 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:24.707077 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:25.187818 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:25.206408 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:25.687991 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:25.709216 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:26.187447 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:26.206141 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:26.688104 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:26.706439 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:27.188248 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:27.205668 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:27.688150 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:27.706979 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:28.187560 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:28.206277 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:28.687500 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:28.706418 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:29.188438 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:29.205606 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:29.688108 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:29.705611 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:30.188299 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:30.205434 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:30.688149 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:30.705569 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:31.188002 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:31.206357 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:31.687513 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:31.706348 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:32.188432 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:32.205825 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:32.687372 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:32.706011 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:33.187808 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:33.206454 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:33.688176 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:33.706868 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:34.187748 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:34.206733 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:34.689008 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:34.706777 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:35.188669 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:35.206116 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:35.687788 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:35.706341 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:36.187905 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:36.206493 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:36.687392 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:36.706006 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:37.187627 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:37.206121 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:37.687520 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:37.706106 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:38.188283 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:38.207132 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:38.687765 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:38.706521 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:39.188690 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:39.206250 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:39.688099 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:39.709870 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:40.187765 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:40.206269 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:40.688627 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:40.705944 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:41.188064 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:41.206635 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:41.688136 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:41.705816 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:42.187418 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:42.205959 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:42.688456 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:42.706708 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:43.188595 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:43.205963 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:43.688357 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:43.705772 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:44.188581 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:44.206409 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:44.687786 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:44.706500 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:45.188494 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:45.206074 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:45.688656 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:45.707231 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:46.187115 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:46.206983 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:46.688217 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:46.705837 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:47.188146 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:47.206409 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:47.688236 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:47.705400 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:48.188425 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:48.205948 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:48.687391 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:48.705997 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:49.187834 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:49.207350 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:49.688040 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:49.706725 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:50.190157 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:50.206669 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:50.688554 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:50.706011 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:51.188165 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:51.207348 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:51.687799 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:51.706421 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:52.188028 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:52.206726 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:52.688253 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:52.706068 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:53.188065 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:53.206571 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:53.688253 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:53.706267 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:54.188111 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:54.206627 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:54.688289 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:54.706013 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:55.187846 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:55.206499 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:55.688856 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:55.706711 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:56.188029 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:56.206362 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:56.687533 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:56.706050 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:57.187879 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:57.206287 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:57.687724 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:57.705924 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:58.187696 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:58.206269 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:58.687666 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:58.706042 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:59.188040 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:59.206613 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:56:59.688028 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:56:59.706472 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:00.189317 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:00.206058 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:00.688177 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:00.707104 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:01.189013 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:01.206341 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:01.688003 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:01.706528 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:02.188283 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:02.205637 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:02.688244 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:02.705332 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:03.188124 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:03.206721 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:03.688329 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:03.705859 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:04.188206 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:04.206475 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:04.688221 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:04.705663 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:05.189017 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:05.206469 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:05.688245 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:05.705990 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:06.188558 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:06.206266 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:06.688364 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:06.706100 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:07.188409 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:07.206647 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:07.688705 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:07.706249 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:08.188227 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:08.206949 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:08.688947 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:08.712881 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:09.188459 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:09.206090 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:09.692986 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:09.706325 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:10.193510 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:10.206640 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:10.689417 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:10.706674 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:11.189333 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:11.206247 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:11.689763 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:11.706723 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:12.188622 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:12.207171 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:12.687789 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:12.706442 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:13.188709 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:13.205972 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:13.687704 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:13.705888 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:14.188287 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:14.206755 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:14.688154 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:14.706469 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:15.190429 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:15.206835 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:15.687769 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:15.706122 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:16.187991 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:16.206637 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:16.687813 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:16.706786 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:17.189533 2191137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:57:17.210780 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:17.690526 2191137 kapi.go:107] duration metric: took 2m23.006359938s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0414 12:57:17.705609 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:18.207304 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:18.706822 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:19.206679 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:19.706890 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:20.206590 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:20.707689 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:21.206607 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:21.706650 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:22.205952 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:22.706097 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:23.208332 2191137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:57:23.706885 2191137 kapi.go:107] duration metric: took 2m26.503862662s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0414 12:57:23.708750 2191137 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-102056 cluster.
	I0414 12:57:23.709963 2191137 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0414 12:57:23.711111 2191137 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0414 12:57:23.712287 2191137 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, storage-provisioner-rancher, cloud-spanner, amd-gpu-device-plugin, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0414 12:57:23.713370 2191137 addons.go:514] duration metric: took 2m37.620183967s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns storage-provisioner-rancher cloud-spanner amd-gpu-device-plugin inspektor-gadget metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0414 12:57:23.713419 2191137 start.go:246] waiting for cluster config update ...
	I0414 12:57:23.713443 2191137 start.go:255] writing updated cluster config ...
	I0414 12:57:23.713721 2191137 ssh_runner.go:195] Run: rm -f paused
	I0414 12:57:23.768489 2191137 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 12:57:23.770079 2191137 out.go:177] * Done! kubectl is now configured to use "addons-102056" cluster and "default" namespace by default
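	(Editor's note, not part of the captured log: the two gcp-auth hints a few lines up — skipping credential mounting via a pod label, and refreshing existing pods — can be illustrated with a minimal, hypothetical pod manifest. The pod name and image below are placeholders and are not taken from this test run; only the `gcp-auth-skip-secret` label key and the --refresh rerun come from the log messages above.)

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth-demo            # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"    # presence of this label tells the gcp-auth webhook not to mount credentials
	spec:
	  containers:
	  - name: app
	    image: nginx                    # placeholder image

	(Pods created before the addon finished enabling can instead be refreshed by rerunning it, e.g. `minikube addons enable gcp-auth --refresh`, as the log message above notes.)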
	
	
	==> CRI-O <==
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.491160458Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744635623491133605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594738,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=371a63bf-30ee-465e-ad34-56f89da49578 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.491873244Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=517e7cfb-a3e0-47cb-9018-3250b21e4ba0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.491948353Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=517e7cfb-a3e0-47cb-9018-3250b21e4ba0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.492236910Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08a1c735ed126e1d2458e683685e0a900c5453c1eb39b594246432714f7732c1,PodSandboxId:7a8736e65655ef324f151cf11ae1215e69560ce46ad3915b3874b5ec22beea51,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744635482557288957,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e50726f-9091-4bfc-8024-50db9a3d55cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a38c0af7b40968af769d8fe872f3c393ddc9fb08d449b318b47374aa65c841,PodSandboxId:e5d8f2afa0709fc9554ca10379f9c355da2116cf912427ce24d4f99a8af84ce2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744635453321194370,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a187f3d8-a406-4837-afb8-81b2942133b1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6204e14e3c27b597d97baaa2d64631e18fe564415501d93c51497675fc4068f,PodSandboxId:e13fba84745bda475b05bdee0a6ea5bbf4f25c8ffc3b48a5a000cac13cb93fd9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744635436862060776,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-b2dn6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 95de2243-cd32-4fce-abde-211b4a01c7fb,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:03bb822ff3fded796d0dcf1367eff6d79e44cea56e90f8deee36c793dd0e928e,PodSandboxId:1e03411142643417b42bc0e8cae5008b4d9713c4f51a673431ae30be76bf6c69,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1744635366484074075,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vwtsw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4fe376df-4b79-475b-8ce2-55aa96b3115b,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8c3102a730c780b887ccb010cc4be2c9c8ae75d2129ea1e877d139c503484c,PodSandboxId:5e7683bac95781c73900ebd190063abce9ef641c2f14578da6bef4b282c31929,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744635366001321002,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-57rnt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d97b0ccc-3293-4726-8a1f-62630a8ee3ce,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a4ada425672eb2c1fadf7e67efc31332752f03c88c28391598bb734a008c49,PodSandboxId:42c784856c57696312ba2e1f7693282700da88f2f7f4effc8d2085ac88763cca,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744635322623874832,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bw2j9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11127e4c-4549-4659-806b-f962e161c496,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:443fbc2ab5acfd9d6fc02a5e23129f3b871e0b6952ec34aea2ead9ec40e6c412,PodSandboxId:3f99c3aa292e3200d8a7abc00877b336bad9ab1445c738e84d5e54aef98aacfd,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744635303723218361,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e31ba3e0-b868-4363-912d-83962ddcc09e,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7024b5a65fd7aeff062b82a8c926c7ab87317c1740c2e4d7740a998b3f79dbb5,PodSandboxId:e247d14d5f15eeb2fa2d695baa88fa7add0b074cd814d8ab3d9863332ca3d7a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744635292487172691,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c993ba5-23fd-439d-98f3-acbde5363ff4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1009eed927386b1a8a432d81b078b70c2953c668a8f9646ce1dd0e6b12f7c973,PodSandboxId:fe2e032593c1d807c7dcc70e5f2b4a3c1b0e82e8e29c79496d6b77b8f5e6a0d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744635290270437253,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zrwsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a09846c0-bce2-4a0f-bfdd-852a49227a49,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3624092c22f6be242f2efa9ce9bbf21c8752295a28bb070632dbb
41516f262a1,PodSandboxId:60d60a38fad050ad836c882a14c9664ea41a82a756bf222291414014de559c13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744635287423736469,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f7vbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7777a51-e192-433d-99d9-adde9cef3add,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf8abf8822dfd312a8cb574a76246f87f919796cec095aefd7d3d781700e2e8e,PodSandboxId:c95d6db5a
66ebd7131c3d1abf2d07c2c23c3417aec90b408a95fb286ddc63da1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744635276054916052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-102056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10eddbdfd59956718084ecce3d1a251a,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7f7a95c02cd4c4e57dd06105462706974c69ee019d2773958e7856ca7719f8,PodSandboxId:6261b7e2bc0f61f626a0f4d25b8277d19d5dbe9d5d706f3fa038fb98
f0cc5fd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744635276079600238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-102056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55be661b9e86c34cd6bfaa042ee81ed7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d90165d309d49d39eb818b99e47dfb635c3f23f5a7b0a68876383ad51651e2,PodSandboxId:68c5c9e830db4ddd113fdcd05e915a562aca22bdd8b048ac77c4a82d0fdd2ee0,Metadata
:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744635275999211837,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-102056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b62607de6239c0cedcebfdd86313e66,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8dcffe39465ff899cf13bd62d98000937a3352c5ca523d5b081d556770fbf2b,PodSandboxId:e4844fa6a9bf255dc953b088fa7e015f65e8d0e34835f4fe6b79eaf79537f951,Metadata:&ContainerMetada
ta{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744635275945949532,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-102056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dcc70f09aa8368104246a16b99e36e3,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=517e7cfb-a3e0-47cb-9018-3250b21e4ba0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.532785797Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=883e123b-3caf-4221-ac09-ea721bf2319d name=/runtime.v1.RuntimeService/Version
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.532868428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=883e123b-3caf-4221-ac09-ea721bf2319d name=/runtime.v1.RuntimeService/Version
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.534238339Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79570e8b-a738-4217-b119-d95b36264ab8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.536140467Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744635623536113870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594738,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79570e8b-a738-4217-b119-d95b36264ab8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.537106158Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=903a6177-3840-4be8-9af5-9ea34b15554d name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.537184947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=903a6177-3840-4be8-9af5-9ea34b15554d name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.537825895Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08a1c735ed126e1d2458e683685e0a900c5453c1eb39b594246432714f7732c1,PodSandboxId:7a8736e65655ef324f151cf11ae1215e69560ce46ad3915b3874b5ec22beea51,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744635482557288957,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e50726f-9091-4bfc-8024-50db9a3d55cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a38c0af7b40968af769d8fe872f3c393ddc9fb08d449b318b47374aa65c841,PodSandboxId:e5d8f2afa0709fc9554ca10379f9c355da2116cf912427ce24d4f99a8af84ce2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744635453321194370,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a187f3d8-a406-4837-afb8-81b2942133b1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6204e14e3c27b597d97baaa2d64631e18fe564415501d93c51497675fc4068f,PodSandboxId:e13fba84745bda475b05bdee0a6ea5bbf4f25c8ffc3b48a5a000cac13cb93fd9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744635436862060776,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-b2dn6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 95de2243-cd32-4fce-abde-211b4a01c7fb,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:03bb822ff3fded796d0dcf1367eff6d79e44cea56e90f8deee36c793dd0e928e,PodSandboxId:1e03411142643417b42bc0e8cae5008b4d9713c4f51a673431ae30be76bf6c69,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1744635366484074075,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vwtsw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4fe376df-4b79-475b-8ce2-55aa96b3115b,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8c3102a730c780b887ccb010cc4be2c9c8ae75d2129ea1e877d139c503484c,PodSandboxId:5e7683bac95781c73900ebd190063abce9ef641c2f14578da6bef4b282c31929,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744635366001321002,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-57rnt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d97b0ccc-3293-4726-8a1f-62630a8ee3ce,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a4ada425672eb2c1fadf7e67efc31332752f03c88c28391598bb734a008c49,PodSandboxId:42c784856c57696312ba2e1f7693282700da88f2f7f4effc8d2085ac88763cca,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744635322623874832,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bw2j9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11127e4c-4549-4659-806b-f962e161c496,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:443fbc2ab5acfd9d6fc02a5e23129f3b871e0b6952ec34aea2ead9ec40e6c412,PodSandboxId:3f99c3aa292e3200d8a7abc00877b336bad9ab1445c738e84d5e54aef98aacfd,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744635303723218361,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e31ba3e0-b868-4363-912d-83962ddcc09e,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7024b5a65fd7aeff062b82a8c926c7ab87317c1740c2e4d7740a998b3f79dbb5,PodSandboxId:e247d14d5f15eeb2fa2d695baa88fa7add0b074cd814d8ab3d9863332ca3d7a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744635292487172691,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c993ba5-23fd-439d-98f3-acbde5363ff4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1009eed927386b1a8a432d81b078b70c2953c668a8f9646ce1dd0e6b12f7c973,PodSandboxId:fe2e032593c1d807c7dcc70e5f2b4a3c1b0e82e8e29c79496d6b77b8f5e6a0d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744635290270437253,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zrwsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a09846c0-bce2-4a0f-bfdd-852a49227a49,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3624092c22f6be242f2efa9ce9bbf21c8752295a28bb070632dbb
41516f262a1,PodSandboxId:60d60a38fad050ad836c882a14c9664ea41a82a756bf222291414014de559c13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744635287423736469,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f7vbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7777a51-e192-433d-99d9-adde9cef3add,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf8abf8822dfd312a8cb574a76246f87f919796cec095aefd7d3d781700e2e8e,PodSandboxId:c95d6db5a
66ebd7131c3d1abf2d07c2c23c3417aec90b408a95fb286ddc63da1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744635276054916052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-102056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10eddbdfd59956718084ecce3d1a251a,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7f7a95c02cd4c4e57dd06105462706974c69ee019d2773958e7856ca7719f8,PodSandboxId:6261b7e2bc0f61f626a0f4d25b8277d19d5dbe9d5d706f3fa038fb98
f0cc5fd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744635276079600238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-102056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55be661b9e86c34cd6bfaa042ee81ed7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d90165d309d49d39eb818b99e47dfb635c3f23f5a7b0a68876383ad51651e2,PodSandboxId:68c5c9e830db4ddd113fdcd05e915a562aca22bdd8b048ac77c4a82d0fdd2ee0,Metadata
:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744635275999211837,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-102056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b62607de6239c0cedcebfdd86313e66,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8dcffe39465ff899cf13bd62d98000937a3352c5ca523d5b081d556770fbf2b,PodSandboxId:e4844fa6a9bf255dc953b088fa7e015f65e8d0e34835f4fe6b79eaf79537f951,Metadata:&ContainerMetada
ta{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744635275945949532,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-102056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dcc70f09aa8368104246a16b99e36e3,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=903a6177-3840-4be8-9af5-9ea34b15554d name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.576204027Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cdf003f6-3777-4566-9ffd-fa416740e976 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.576299814Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cdf003f6-3777-4566-9ffd-fa416740e976 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.577295000Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b24ef339-2a85-4cc7-ba6d-ccb6e907e9cc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.578581292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744635623578555859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594738,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b24ef339-2a85-4cc7-ba6d-ccb6e907e9cc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.580031558Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a5d6a51-16cd-454c-80ee-12ab366158bc name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.580104929Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a5d6a51-16cd-454c-80ee-12ab366158bc name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.580415433Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08a1c735ed126e1d2458e683685e0a900c5453c1eb39b594246432714f7732c1,PodSandboxId:7a8736e65655ef324f151cf11ae1215e69560ce46ad3915b3874b5ec22beea51,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744635482557288957,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e50726f-9091-4bfc-8024-50db9a3d55cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a38c0af7b40968af769d8fe872f3c393ddc9fb08d449b318b47374aa65c841,PodSandboxId:e5d8f2afa0709fc9554ca10379f9c355da2116cf912427ce24d4f99a8af84ce2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744635453321194370,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a187f3d8-a406-4837-afb8-81b2942133b1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6204e14e3c27b597d97baaa2d64631e18fe564415501d93c51497675fc4068f,PodSandboxId:e13fba84745bda475b05bdee0a6ea5bbf4f25c8ffc3b48a5a000cac13cb93fd9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744635436862060776,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-b2dn6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 95de2243-cd32-4fce-abde-211b4a01c7fb,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:03bb822ff3fded796d0dcf1367eff6d79e44cea56e90f8deee36c793dd0e928e,PodSandboxId:1e03411142643417b42bc0e8cae5008b4d9713c4f51a673431ae30be76bf6c69,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1744635366484074075,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vwtsw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4fe376df-4b79-475b-8ce2-55aa96b3115b,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8c3102a730c780b887ccb010cc4be2c9c8ae75d2129ea1e877d139c503484c,PodSandboxId:5e7683bac95781c73900ebd190063abce9ef641c2f14578da6bef4b282c31929,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744635366001321002,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-57rnt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d97b0ccc-3293-4726-8a1f-62630a8ee3ce,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a4ada425672eb2c1fadf7e67efc31332752f03c88c28391598bb734a008c49,PodSandboxId:42c784856c57696312ba2e1f7693282700da88f2f7f4effc8d2085ac88763cca,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744635322623874832,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bw2j9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11127e4c-4549-4659-806b-f962e161c496,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:443fbc2ab5acfd9d6fc02a5e23129f3b871e0b6952ec34aea2ead9ec40e6c412,PodSandboxId:3f99c3aa292e3200d8a7abc00877b336bad9ab1445c738e84d5e54aef98aacfd,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744635303723218361,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e31ba3e0-b868-4363-912d-83962ddcc09e,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7024b5a65fd7aeff062b82a8c926c7ab87317c1740c2e4d7740a998b3f79dbb5,PodSandboxId:e247d14d5f15eeb2fa2d695baa88fa7add0b074cd814d8ab3d9863332ca3d7a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744635292487172691,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c993ba5-23fd-439d-98f3-acbde5363ff4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1009eed927386b1a8a432d81b078b70c2953c668a8f9646ce1dd0e6b12f7c973,PodSandboxId:fe2e032593c1d807c7dcc70e5f2b4a3c1b0e82e8e29c79496d6b77b8f5e6a0d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744635290270437253,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zrwsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a09846c0-bce2-4a0f-bfdd-852a49227a49,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3624092c22f6be242f2efa9ce9bbf21c8752295a28bb070632dbb
41516f262a1,PodSandboxId:60d60a38fad050ad836c882a14c9664ea41a82a756bf222291414014de559c13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744635287423736469,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f7vbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7777a51-e192-433d-99d9-adde9cef3add,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf8abf8822dfd312a8cb574a76246f87f919796cec095aefd7d3d781700e2e8e,PodSandboxId:c95d6db5a
66ebd7131c3d1abf2d07c2c23c3417aec90b408a95fb286ddc63da1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744635276054916052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-102056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10eddbdfd59956718084ecce3d1a251a,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7f7a95c02cd4c4e57dd06105462706974c69ee019d2773958e7856ca7719f8,PodSandboxId:6261b7e2bc0f61f626a0f4d25b8277d19d5dbe9d5d706f3fa038fb98
f0cc5fd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744635276079600238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-102056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55be661b9e86c34cd6bfaa042ee81ed7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d90165d309d49d39eb818b99e47dfb635c3f23f5a7b0a68876383ad51651e2,PodSandboxId:68c5c9e830db4ddd113fdcd05e915a562aca22bdd8b048ac77c4a82d0fdd2ee0,Metadata
:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744635275999211837,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-102056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b62607de6239c0cedcebfdd86313e66,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8dcffe39465ff899cf13bd62d98000937a3352c5ca523d5b081d556770fbf2b,PodSandboxId:e4844fa6a9bf255dc953b088fa7e015f65e8d0e34835f4fe6b79eaf79537f951,Metadata:&ContainerMetada
ta{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744635275945949532,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-102056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dcc70f09aa8368104246a16b99e36e3,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a5d6a51-16cd-454c-80ee-12ab366158bc name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.616170103Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7eb9f9f7-ae7f-4094-a1e9-46b9a0d72e21 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.616267850Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7eb9f9f7-ae7f-4094-a1e9-46b9a0d72e21 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.617941465Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f3b9af3a-e678-4481-af62-d174ccff8d7d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.619157946Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744635623619128846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594738,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3b9af3a-e678-4481-af62-d174ccff8d7d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.619731221Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bae0c112-0142-4bc6-ae7e-a2fc1ee0ddd5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.619788997Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bae0c112-0142-4bc6-ae7e-a2fc1ee0ddd5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:00:23 addons-102056 crio[666]: time="2025-04-14 13:00:23.620086045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08a1c735ed126e1d2458e683685e0a900c5453c1eb39b594246432714f7732c1,PodSandboxId:7a8736e65655ef324f151cf11ae1215e69560ce46ad3915b3874b5ec22beea51,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744635482557288957,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e50726f-9091-4bfc-8024-50db9a3d55cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a38c0af7b40968af769d8fe872f3c393ddc9fb08d449b318b47374aa65c841,PodSandboxId:e5d8f2afa0709fc9554ca10379f9c355da2116cf912427ce24d4f99a8af84ce2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744635453321194370,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a187f3d8-a406-4837-afb8-81b2942133b1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6204e14e3c27b597d97baaa2d64631e18fe564415501d93c51497675fc4068f,PodSandboxId:e13fba84745bda475b05bdee0a6ea5bbf4f25c8ffc3b48a5a000cac13cb93fd9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744635436862060776,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-b2dn6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 95de2243-cd32-4fce-abde-211b4a01c7fb,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:03bb822ff3fded796d0dcf1367eff6d79e44cea56e90f8deee36c793dd0e928e,PodSandboxId:1e03411142643417b42bc0e8cae5008b4d9713c4f51a673431ae30be76bf6c69,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1744635366484074075,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vwtsw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4fe376df-4b79-475b-8ce2-55aa96b3115b,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8c3102a730c780b887ccb010cc4be2c9c8ae75d2129ea1e877d139c503484c,PodSandboxId:5e7683bac95781c73900ebd190063abce9ef641c2f14578da6bef4b282c31929,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744635366001321002,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-57rnt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d97b0ccc-3293-4726-8a1f-62630a8ee3ce,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a4ada425672eb2c1fadf7e67efc31332752f03c88c28391598bb734a008c49,PodSandboxId:42c784856c57696312ba2e1f7693282700da88f2f7f4effc8d2085ac88763cca,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744635322623874832,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bw2j9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11127e4c-4549-4659-806b-f962e161c496,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:443fbc2ab5acfd9d6fc02a5e23129f3b871e0b6952ec34aea2ead9ec40e6c412,PodSandboxId:3f99c3aa292e3200d8a7abc00877b336bad9ab1445c738e84d5e54aef98aacfd,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744635303723218361,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e31ba3e0-b868-4363-912d-83962ddcc09e,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7024b5a65fd7aeff062b82a8c926c7ab87317c1740c2e4d7740a998b3f79dbb5,PodSandboxId:e247d14d5f15eeb2fa2d695baa88fa7add0b074cd814d8ab3d9863332ca3d7a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744635292487172691,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c993ba5-23fd-439d-98f3-acbde5363ff4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1009eed927386b1a8a432d81b078b70c2953c668a8f9646ce1dd0e6b12f7c973,PodSandboxId:fe2e032593c1d807c7dcc70e5f2b4a3c1b0e82e8e29c79496d6b77b8f5e6a0d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744635290270437253,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zrwsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a09846c0-bce2-4a0f-bfdd-852a49227a49,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3624092c22f6be242f2efa9ce9bbf21c8752295a28bb070632dbb
41516f262a1,PodSandboxId:60d60a38fad050ad836c882a14c9664ea41a82a756bf222291414014de559c13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744635287423736469,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f7vbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7777a51-e192-433d-99d9-adde9cef3add,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf8abf8822dfd312a8cb574a76246f87f919796cec095aefd7d3d781700e2e8e,PodSandboxId:c95d6db5a
66ebd7131c3d1abf2d07c2c23c3417aec90b408a95fb286ddc63da1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744635276054916052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-102056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10eddbdfd59956718084ecce3d1a251a,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7f7a95c02cd4c4e57dd06105462706974c69ee019d2773958e7856ca7719f8,PodSandboxId:6261b7e2bc0f61f626a0f4d25b8277d19d5dbe9d5d706f3fa038fb98
f0cc5fd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744635276079600238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-102056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55be661b9e86c34cd6bfaa042ee81ed7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d90165d309d49d39eb818b99e47dfb635c3f23f5a7b0a68876383ad51651e2,PodSandboxId:68c5c9e830db4ddd113fdcd05e915a562aca22bdd8b048ac77c4a82d0fdd2ee0,Metadata
:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744635275999211837,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-102056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b62607de6239c0cedcebfdd86313e66,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8dcffe39465ff899cf13bd62d98000937a3352c5ca523d5b081d556770fbf2b,PodSandboxId:e4844fa6a9bf255dc953b088fa7e015f65e8d0e34835f4fe6b79eaf79537f951,Metadata:&ContainerMetada
ta{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744635275945949532,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-102056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dcc70f09aa8368104246a16b99e36e3,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bae0c112-0142-4bc6-ae7e-a2fc1ee0ddd5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	08a1c735ed126       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago       Running             nginx                     0                   7a8736e65655e       nginx
	42a38c0af7b40       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   e5d8f2afa0709       busybox
	f6204e14e3c27       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   e13fba84745bd       ingress-nginx-controller-56d7c84fd4-b2dn6
	03bb822ff3fde       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             4 minutes ago       Exited              patch                     1                   1e03411142643       ingress-nginx-admission-patch-vwtsw
	fd8c3102a730c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   5e7683bac9578       ingress-nginx-admission-create-57rnt
	27a4ada425672       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   42c784856c576       amd-gpu-device-plugin-bw2j9
	443fbc2ab5acf       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             5 minutes ago       Running             minikube-ingress-dns      0                   3f99c3aa292e3       kube-ingress-dns-minikube
	7024b5a65fd7a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   e247d14d5f15e       storage-provisioner
	1009eed927386       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             5 minutes ago       Running             coredns                   0                   fe2e032593c1d       coredns-668d6bf9bc-zrwsv
	3624092c22f6b       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                             5 minutes ago       Running             kube-proxy                0                   60d60a38fad05       kube-proxy-f7vbt
	ef7f7a95c02cd       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                             5 minutes ago       Running             kube-scheduler            0                   6261b7e2bc0f6       kube-scheduler-addons-102056
	cf8abf8822dfd       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             5 minutes ago       Running             etcd                      0                   c95d6db5a66eb       etcd-addons-102056
	87d90165d309d       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                             5 minutes ago       Running             kube-apiserver            0                   68c5c9e830db4       kube-apiserver-addons-102056
	d8dcffe39465f       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                             5 minutes ago       Running             kube-controller-manager   0                   e4844fa6a9bf2       kube-controller-manager-addons-102056
	
	
	==> coredns [1009eed927386b1a8a432d81b078b70c2953c668a8f9646ce1dd0e6b12f7c973] <==
	[INFO] 10.244.0.7:49908 - 32653 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000485055s
	[INFO] 10.244.0.7:49908 - 26934 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000154149s
	[INFO] 10.244.0.7:49908 - 54712 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000098612s
	[INFO] 10.244.0.7:49908 - 38975 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000096068s
	[INFO] 10.244.0.7:49908 - 53778 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000066992s
	[INFO] 10.244.0.7:49908 - 11835 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000137709s
	[INFO] 10.244.0.7:49908 - 27375 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000111527s
	[INFO] 10.244.0.7:46212 - 40217 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000272072s
	[INFO] 10.244.0.7:46212 - 39934 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000333171s
	[INFO] 10.244.0.7:59982 - 27907 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000076955s
	[INFO] 10.244.0.7:59982 - 27655 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00005715s
	[INFO] 10.244.0.7:49642 - 13955 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000063783s
	[INFO] 10.244.0.7:49642 - 13751 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049209s
	[INFO] 10.244.0.7:33303 - 35235 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000067577s
	[INFO] 10.244.0.7:33303 - 35376 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00010083s
	[INFO] 10.244.0.23:45492 - 32266 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000553778s
	[INFO] 10.244.0.23:53190 - 53990 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000133348s
	[INFO] 10.244.0.23:35294 - 8687 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139707s
	[INFO] 10.244.0.23:46365 - 63701 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135196s
	[INFO] 10.244.0.23:53557 - 47425 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128769s
	[INFO] 10.244.0.23:58618 - 31268 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000107392s
	[INFO] 10.244.0.23:51422 - 31945 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004298554s
	[INFO] 10.244.0.23:51553 - 42094 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.003923748s
	[INFO] 10.244.0.27:49252 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000317545s
	[INFO] 10.244.0.27:52148 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000125049s
	
	
	==> describe nodes <==
	Name:               addons-102056
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-102056
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=460835bb8f21087bfa90e48a25f4afc66a903d88
	                    minikube.k8s.io/name=addons-102056
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T12_54_41_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-102056
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 12:54:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-102056
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 13:00:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 12:58:45 +0000   Mon, 14 Apr 2025 12:54:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 12:58:45 +0000   Mon, 14 Apr 2025 12:54:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 12:58:45 +0000   Mon, 14 Apr 2025 12:54:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 12:58:45 +0000   Mon, 14 Apr 2025 12:54:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    addons-102056
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 d260c83df83a4aa2b3a3b95c8f3dcd93
	  System UUID:                d260c83d-f83a-4aa2-b3a3-b95c8f3dcd93
	  Boot ID:                    8534fddc-d38a-454b-a2ed-7779603c4a4d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  default                     hello-world-app-7d9564db4-pm4gh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-b2dn6    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m29s
	  kube-system                 amd-gpu-device-plugin-bw2j9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 coredns-668d6bf9bc-zrwsv                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m37s
	  kube-system                 etcd-addons-102056                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m42s
	  kube-system                 kube-apiserver-addons-102056                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 kube-controller-manager-addons-102056        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-proxy-f7vbt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-scheduler-addons-102056                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m35s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m48s (x8 over 5m48s)  kubelet          Node addons-102056 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m48s (x8 over 5m48s)  kubelet          Node addons-102056 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m48s (x7 over 5m48s)  kubelet          Node addons-102056 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m42s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m42s                  kubelet          Node addons-102056 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m42s                  kubelet          Node addons-102056 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m42s                  kubelet          Node addons-102056 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m41s                  kubelet          Node addons-102056 status is now: NodeReady
	  Normal  RegisteredNode           5m38s                  node-controller  Node addons-102056 event: Registered Node addons-102056 in Controller
	
	
	==> dmesg <==
	[  +0.056395] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.982260] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[  +0.077298] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.281032] systemd-fstab-generator[1357]: Ignoring "noauto" option for root device
	[  +0.142296] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.066410] kauditd_printk_skb: 114 callbacks suppressed
	[  +5.070485] kauditd_printk_skb: 153 callbacks suppressed
	[Apr14 12:55] kauditd_printk_skb: 64 callbacks suppressed
	[ +35.870902] kauditd_printk_skb: 2 callbacks suppressed
	[ +15.775262] kauditd_printk_skb: 4 callbacks suppressed
	[Apr14 12:56] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.808662] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.926841] kauditd_printk_skb: 28 callbacks suppressed
	[Apr14 12:57] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.730561] kauditd_printk_skb: 9 callbacks suppressed
	[ +20.685205] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.487315] kauditd_printk_skb: 6 callbacks suppressed
	[Apr14 12:58] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.653973] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.630501] kauditd_printk_skb: 43 callbacks suppressed
	[  +6.294835] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.007276] kauditd_printk_skb: 38 callbacks suppressed
	[ +11.484930] kauditd_printk_skb: 4 callbacks suppressed
	[Apr14 12:59] kauditd_printk_skb: 7 callbacks suppressed
	[Apr14 13:00] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [cf8abf8822dfd312a8cb574a76246f87f919796cec095aefd7d3d781700e2e8e] <==
	{"level":"warn","ts":"2025-04-14T12:56:03.511041Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.534122ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T12:56:03.512078Z","caller":"traceutil/trace.go:171","msg":"trace[319863926] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:1012; }","duration":"124.605717ms","start":"2025-04-14T12:56:03.387467Z","end":"2025-04-14T12:56:03.512073Z","steps":["trace[319863926] 'agreement among raft nodes before linearized reading'  (duration: 123.550646ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:56:03.511060Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.541616ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-04-14T12:56:03.511073Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.878805ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T12:56:03.512514Z","caller":"traceutil/trace.go:171","msg":"trace[868239775] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1012; }","duration":"227.331899ms","start":"2025-04-14T12:56:03.285173Z","end":"2025-04-14T12:56:03.512505Z","steps":["trace[868239775] 'agreement among raft nodes before linearized reading'  (duration: 225.892533ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T12:56:03.513811Z","caller":"traceutil/trace.go:171","msg":"trace[1091046336] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1012; }","duration":"201.304453ms","start":"2025-04-14T12:56:03.312496Z","end":"2025-04-14T12:56:03.513801Z","steps":["trace[1091046336] 'agreement among raft nodes before linearized reading'  (duration: 198.554125ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T12:56:16.665776Z","caller":"traceutil/trace.go:171","msg":"trace[102200879] linearizableReadLoop","detail":"{readStateIndex:1155; appliedIndex:1154; }","duration":"382.500721ms","start":"2025-04-14T12:56:16.283258Z","end":"2025-04-14T12:56:16.665759Z","steps":["trace[102200879] 'read index received'  (duration: 382.270651ms)","trace[102200879] 'applied index is now lower than readState.Index'  (duration: 229.726µs)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T12:56:16.665875Z","caller":"traceutil/trace.go:171","msg":"trace[719479336] transaction","detail":"{read_only:false; response_revision:1121; number_of_response:1; }","duration":"435.02268ms","start":"2025-04-14T12:56:16.230845Z","end":"2025-04-14T12:56:16.665868Z","steps":["trace[719479336] 'process raft request'  (duration: 434.732275ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:56:16.665955Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T12:56:16.230827Z","time spent":"435.063236ms","remote":"127.0.0.1:55256","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1111 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-04-14T12:56:16.666044Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"382.785952ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T12:56:16.666064Z","caller":"traceutil/trace.go:171","msg":"trace[502680869] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1121; }","duration":"382.822686ms","start":"2025-04-14T12:56:16.283234Z","end":"2025-04-14T12:56:16.666056Z","steps":["trace[502680869] 'agreement among raft nodes before linearized reading'  (duration: 382.79446ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:56:16.666083Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T12:56:16.283201Z","time spent":"382.876401ms","remote":"127.0.0.1:55276","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-04-14T12:56:16.666006Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.096641ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T12:56:16.666153Z","caller":"traceutil/trace.go:171","msg":"trace[1821663504] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1121; }","duration":"141.249939ms","start":"2025-04-14T12:56:16.524895Z","end":"2025-04-14T12:56:16.666145Z","steps":["trace[1821663504] 'agreement among raft nodes before linearized reading'  (duration: 141.077909ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T12:57:55.088909Z","caller":"traceutil/trace.go:171","msg":"trace[1713401743] linearizableReadLoop","detail":"{readStateIndex:1456; appliedIndex:1455; }","duration":"117.619486ms","start":"2025-04-14T12:57:54.971267Z","end":"2025-04-14T12:57:55.088886Z","steps":["trace[1713401743] 'read index received'  (duration: 117.470289ms)","trace[1713401743] 'applied index is now lower than readState.Index'  (duration: 148.813µs)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T12:57:55.089134Z","caller":"traceutil/trace.go:171","msg":"trace[597936513] transaction","detail":"{read_only:false; response_revision:1400; number_of_response:1; }","duration":"248.059241ms","start":"2025-04-14T12:57:54.841063Z","end":"2025-04-14T12:57:55.089122Z","steps":["trace[597936513] 'process raft request'  (duration: 247.712593ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:57:55.089149Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.798614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-04-14T12:57:55.089225Z","caller":"traceutil/trace.go:171","msg":"trace[221935814] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; response_count:0; response_revision:1400; }","duration":"117.975892ms","start":"2025-04-14T12:57:54.971243Z","end":"2025-04-14T12:57:55.089218Z","steps":["trace[221935814] 'agreement among raft nodes before linearized reading'  (duration: 117.802975ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T12:58:01.605025Z","caller":"traceutil/trace.go:171","msg":"trace[326052399] linearizableReadLoop","detail":"{readStateIndex:1546; appliedIndex:1545; }","duration":"103.773999ms","start":"2025-04-14T12:58:01.501238Z","end":"2025-04-14T12:58:01.605012Z","steps":["trace[326052399] 'read index received'  (duration: 103.62419ms)","trace[326052399] 'applied index is now lower than readState.Index'  (duration: 149.364µs)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T12:58:01.605355Z","caller":"traceutil/trace.go:171","msg":"trace[1460715591] transaction","detail":"{read_only:false; response_revision:1486; number_of_response:1; }","duration":"137.57806ms","start":"2025-04-14T12:58:01.467766Z","end":"2025-04-14T12:58:01.605344Z","steps":["trace[1460715591] 'process raft request'  (duration: 137.136131ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:58:01.605550Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.313753ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T12:58:01.605572Z","caller":"traceutil/trace.go:171","msg":"trace[1896977151] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1486; }","duration":"104.371125ms","start":"2025-04-14T12:58:01.501194Z","end":"2025-04-14T12:58:01.605566Z","steps":["trace[1896977151] 'agreement among raft nodes before linearized reading'  (duration: 104.304213ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:58:33.767023Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.152688ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T12:58:33.767173Z","caller":"traceutil/trace.go:171","msg":"trace[889068382] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:1719; }","duration":"124.32905ms","start":"2025-04-14T12:58:33.642830Z","end":"2025-04-14T12:58:33.767159Z","steps":["trace[889068382] 'count revisions from in-memory index tree'  (duration: 124.104016ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T12:58:41.950315Z","caller":"traceutil/trace.go:171","msg":"trace[611235997] transaction","detail":"{read_only:false; response_revision:1735; number_of_response:1; }","duration":"137.033732ms","start":"2025-04-14T12:58:41.813267Z","end":"2025-04-14T12:58:41.950300Z","steps":["trace[611235997] 'process raft request'  (duration: 136.911728ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:00:23 up 6 min,  0 users,  load average: 0.96, 1.08, 0.59
	Linux addons-102056 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [87d90165d309d49d39eb818b99e47dfb635c3f23f5a7b0a68876383ad51651e2] <==
	I0414 12:55:51.336932       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0414 12:57:39.885168       1 conn.go:339] Error on socket receive: read tcp 192.168.39.15:8443->192.168.39.1:40806: use of closed network connection
	E0414 12:57:40.074444       1 conn.go:339] Error on socket receive: read tcp 192.168.39.15:8443->192.168.39.1:40826: use of closed network connection
	I0414 12:57:49.289158       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.206.156"}
	I0414 12:57:55.413912       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0414 12:57:56.295254       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0414 12:57:56.493811       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.80.155"}
	W0414 12:57:56.507121       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0414 12:58:38.044250       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0414 12:58:42.490298       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0414 12:58:52.318951       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0414 12:59:14.836696       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 12:59:14.836742       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 12:59:14.869438       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 12:59:14.869505       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 12:59:14.873863       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 12:59:14.873910       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 12:59:14.888348       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 12:59:14.888450       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 12:59:14.910543       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 12:59:14.910878       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0414 12:59:15.874052       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0414 12:59:15.912362       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0414 12:59:16.020804       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0414 13:00:22.421156       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.104.24"}
	
	
	==> kube-controller-manager [d8dcffe39465ff899cf13bd62d98000937a3352c5ca523d5b081d556770fbf2b] <==
	W0414 12:59:32.333602       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 12:59:32.333750       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 12:59:46.254528       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 12:59:46.255925       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0414 12:59:46.256805       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 12:59:46.256847       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 12:59:48.352921       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 12:59:48.353940       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0414 12:59:48.354755       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 12:59:48.354811       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 12:59:49.794718       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 12:59:49.795897       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0414 12:59:49.796950       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 12:59:49.797029       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 12:59:53.925478       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 12:59:53.926854       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0414 12:59:53.927977       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 12:59:53.928094       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 13:00:22.014876       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 13:00:22.016209       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0414 13:00:22.017164       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 13:00:22.017263       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0414 13:00:22.252345       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="60.091258ms"
	I0414 13:00:22.265528       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="13.051585ms"
	I0414 13:00:22.265716       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="92.357µs"
	
	
	==> kube-proxy [3624092c22f6be242f2efa9ce9bbf21c8752295a28bb070632dbb41516f262a1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 12:54:48.343330       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 12:54:48.407346       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.15"]
	E0414 12:54:48.407430       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 12:54:48.550331       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 12:54:48.550382       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 12:54:48.550405       1 server_linux.go:170] "Using iptables Proxier"
	I0414 12:54:48.582521       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 12:54:48.582888       1 server.go:497] "Version info" version="v1.32.2"
	I0414 12:54:48.582901       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 12:54:48.588911       1 config.go:199] "Starting service config controller"
	I0414 12:54:48.588954       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 12:54:48.588996       1 config.go:105] "Starting endpoint slice config controller"
	I0414 12:54:48.589001       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 12:54:48.589481       1 config.go:329] "Starting node config controller"
	I0414 12:54:48.589487       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 12:54:48.689541       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 12:54:48.689585       1 shared_informer.go:320] Caches are synced for node config
	I0414 12:54:48.689596       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [ef7f7a95c02cd4c4e57dd06105462706974c69ee019d2773958e7856ca7719f8] <==
	W0414 12:54:38.543779       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0414 12:54:38.546562       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0414 12:54:38.543812       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 12:54:38.546622       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:54:39.390560       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 12:54:39.390591       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:54:39.401377       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 12:54:39.401562       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 12:54:39.451721       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 12:54:39.451841       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 12:54:39.508533       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0414 12:54:39.508642       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:54:39.621961       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0414 12:54:39.622051       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 12:54:39.627334       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 12:54:39.627434       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:54:39.651876       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 12:54:39.651925       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0414 12:54:39.692795       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0414 12:54:39.692874       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:54:39.731613       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0414 12:54:39.731707       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0414 12:54:39.752852       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0414 12:54:39.753290       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0414 12:54:42.238043       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 12:59:41 addons-102056 kubelet[1225]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 12:59:41 addons-102056 kubelet[1225]: E0414 12:59:41.505405    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744635581505011274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594738,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:59:41 addons-102056 kubelet[1225]: E0414 12:59:41.505440    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744635581505011274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594738,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:59:51 addons-102056 kubelet[1225]: E0414 12:59:51.508475    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744635591508001427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594738,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:59:51 addons-102056 kubelet[1225]: E0414 12:59:51.508530    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744635591508001427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594738,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 13:00:01 addons-102056 kubelet[1225]: E0414 13:00:01.511229    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744635601510624628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594738,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 13:00:01 addons-102056 kubelet[1225]: E0414 13:00:01.511319    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744635601510624628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594738,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 13:00:11 addons-102056 kubelet[1225]: E0414 13:00:11.513434    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744635611513121235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594738,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 13:00:11 addons-102056 kubelet[1225]: E0414 13:00:11.513487    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744635611513121235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594738,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 13:00:19 addons-102056 kubelet[1225]: I0414 13:00:19.151950    1225 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-bw2j9" secret="" err="secret \"gcp-auth\" not found"
	Apr 14 13:00:21 addons-102056 kubelet[1225]: E0414 13:00:21.516342    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744635621515836401,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594738,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 13:00:21 addons-102056 kubelet[1225]: E0414 13:00:21.516640    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744635621515836401,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594738,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 13:00:22 addons-102056 kubelet[1225]: I0414 13:00:22.152085    1225 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Apr 14 13:00:22 addons-102056 kubelet[1225]: I0414 13:00:22.238466    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="bd8522d0-ea69-40e4-b758-fc2a38b768b6" containerName="liveness-probe"
	Apr 14 13:00:22 addons-102056 kubelet[1225]: I0414 13:00:22.238622    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="bd8522d0-ea69-40e4-b758-fc2a38b768b6" containerName="csi-provisioner"
	Apr 14 13:00:22 addons-102056 kubelet[1225]: I0414 13:00:22.238724    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="bd8522d0-ea69-40e4-b758-fc2a38b768b6" containerName="node-driver-registrar"
	Apr 14 13:00:22 addons-102056 kubelet[1225]: I0414 13:00:22.238762    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="e6d766c4-de0e-4533-8c82-626dba416245" containerName="volume-snapshot-controller"
	Apr 14 13:00:22 addons-102056 kubelet[1225]: I0414 13:00:22.238795    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="d285642f-bb79-4e03-bdaf-62a3e8c464ee" containerName="csi-resizer"
	Apr 14 13:00:22 addons-102056 kubelet[1225]: I0414 13:00:22.238828    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="a7e46af2-ae66-4358-b776-728fcdb77c91" containerName="csi-attacher"
	Apr 14 13:00:22 addons-102056 kubelet[1225]: I0414 13:00:22.238882    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="bd8522d0-ea69-40e4-b758-fc2a38b768b6" containerName="csi-snapshotter"
	Apr 14 13:00:22 addons-102056 kubelet[1225]: I0414 13:00:22.238916    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="6587aa05-4548-4e44-b383-34f03a7100ab" containerName="volume-snapshot-controller"
	Apr 14 13:00:22 addons-102056 kubelet[1225]: I0414 13:00:22.238949    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="bd8522d0-ea69-40e4-b758-fc2a38b768b6" containerName="csi-external-health-monitor-controller"
	Apr 14 13:00:22 addons-102056 kubelet[1225]: I0414 13:00:22.238988    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="bd8522d0-ea69-40e4-b758-fc2a38b768b6" containerName="hostpath"
	Apr 14 13:00:22 addons-102056 kubelet[1225]: I0414 13:00:22.239021    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="46920e6e-9e40-48de-b16c-da16be58802c" containerName="task-pv-container"
	Apr 14 13:00:22 addons-102056 kubelet[1225]: I0414 13:00:22.350614    1225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8896z\" (UniqueName: \"kubernetes.io/projected/811d26f1-c345-4723-a3fe-18d089c74e6e-kube-api-access-8896z\") pod \"hello-world-app-7d9564db4-pm4gh\" (UID: \"811d26f1-c345-4723-a3fe-18d089c74e6e\") " pod="default/hello-world-app-7d9564db4-pm4gh"
	
	
	==> storage-provisioner [7024b5a65fd7aeff062b82a8c926c7ab87317c1740c2e4d7740a998b3f79dbb5] <==
	I0414 12:54:53.874603       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0414 12:54:53.950160       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0414 12:54:53.950218       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0414 12:54:53.996220       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0414 12:54:53.996385       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-102056_d11b6a53-91e3-44c5-b831-1baf2abd6ab0!
	I0414 12:54:53.997388       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"93c140cc-0152-4e89-b934-08fbd8b0ee01", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-102056_d11b6a53-91e3-44c5-b831-1baf2abd6ab0 became leader
	I0414 12:54:54.125738       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-102056_d11b6a53-91e3-44c5-b831-1baf2abd6ab0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-102056 -n addons-102056
helpers_test.go:261: (dbg) Run:  kubectl --context addons-102056 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-pm4gh ingress-nginx-admission-create-57rnt ingress-nginx-admission-patch-vwtsw
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-102056 describe pod hello-world-app-7d9564db4-pm4gh ingress-nginx-admission-create-57rnt ingress-nginx-admission-patch-vwtsw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-102056 describe pod hello-world-app-7d9564db4-pm4gh ingress-nginx-admission-create-57rnt ingress-nginx-admission-patch-vwtsw: exit status 1 (64.049134ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-pm4gh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-102056/192.168.39.15
	Start Time:       Mon, 14 Apr 2025 13:00:22 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8896z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8896z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-pm4gh to addons-102056
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-57rnt" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vwtsw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-102056 describe pod hello-world-app-7d9564db4-pm4gh ingress-nginx-admission-create-57rnt ingress-nginx-admission-patch-vwtsw: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-102056 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-102056 addons disable ingress-dns --alsologtostderr -v=1: (1.831661315s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-102056 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-102056 addons disable ingress --alsologtostderr -v=1: (7.701362369s)
--- FAIL: TestAddons/parallel/Ingress (158.32s)

                                                
                                    
TestPreload (208.98s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-944524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0414 13:50:27.987256 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-944524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m54.034370915s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-944524 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-944524 image pull gcr.io/k8s-minikube/busybox: (7.016524442s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-944524
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-944524: (7.304927004s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-944524 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0414 13:52:08.849353 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:52:25.776158 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-944524 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m17.421270393s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-944524 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:631: *** TestPreload FAILED at 2025-04-14 13:52:43.187286774 +0000 UTC m=+3574.992043501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-944524 -n test-preload-944524
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-944524 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-944524 logs -n 25: (1.085255756s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-190122 ssh -n                                                                 | multinode-190122     | jenkins | v1.35.0 | 14 Apr 25 13:36 UTC | 14 Apr 25 13:36 UTC |
	|         | multinode-190122-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-190122 ssh -n multinode-190122 sudo cat                                       | multinode-190122     | jenkins | v1.35.0 | 14 Apr 25 13:36 UTC | 14 Apr 25 13:36 UTC |
	|         | /home/docker/cp-test_multinode-190122-m03_multinode-190122.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-190122 cp multinode-190122-m03:/home/docker/cp-test.txt                       | multinode-190122     | jenkins | v1.35.0 | 14 Apr 25 13:36 UTC | 14 Apr 25 13:36 UTC |
	|         | multinode-190122-m02:/home/docker/cp-test_multinode-190122-m03_multinode-190122-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-190122 ssh -n                                                                 | multinode-190122     | jenkins | v1.35.0 | 14 Apr 25 13:36 UTC | 14 Apr 25 13:36 UTC |
	|         | multinode-190122-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-190122 ssh -n multinode-190122-m02 sudo cat                                   | multinode-190122     | jenkins | v1.35.0 | 14 Apr 25 13:36 UTC | 14 Apr 25 13:36 UTC |
	|         | /home/docker/cp-test_multinode-190122-m03_multinode-190122-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-190122 node stop m03                                                          | multinode-190122     | jenkins | v1.35.0 | 14 Apr 25 13:36 UTC | 14 Apr 25 13:36 UTC |
	| node    | multinode-190122 node start                                                             | multinode-190122     | jenkins | v1.35.0 | 14 Apr 25 13:36 UTC | 14 Apr 25 13:37 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-190122                                                                | multinode-190122     | jenkins | v1.35.0 | 14 Apr 25 13:37 UTC |                     |
	| stop    | -p multinode-190122                                                                     | multinode-190122     | jenkins | v1.35.0 | 14 Apr 25 13:37 UTC | 14 Apr 25 13:40 UTC |
	| start   | -p multinode-190122                                                                     | multinode-190122     | jenkins | v1.35.0 | 14 Apr 25 13:40 UTC | 14 Apr 25 13:42 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-190122                                                                | multinode-190122     | jenkins | v1.35.0 | 14 Apr 25 13:42 UTC |                     |
	| node    | multinode-190122 node delete                                                            | multinode-190122     | jenkins | v1.35.0 | 14 Apr 25 13:42 UTC | 14 Apr 25 13:42 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-190122 stop                                                                   | multinode-190122     | jenkins | v1.35.0 | 14 Apr 25 13:42 UTC | 14 Apr 25 13:45 UTC |
	| start   | -p multinode-190122                                                                     | multinode-190122     | jenkins | v1.35.0 | 14 Apr 25 13:45 UTC | 14 Apr 25 13:48 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-190122                                                                | multinode-190122     | jenkins | v1.35.0 | 14 Apr 25 13:48 UTC |                     |
	| start   | -p multinode-190122-m02                                                                 | multinode-190122-m02 | jenkins | v1.35.0 | 14 Apr 25 13:48 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-190122-m03                                                                 | multinode-190122-m03 | jenkins | v1.35.0 | 14 Apr 25 13:48 UTC | 14 Apr 25 13:49 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-190122                                                                 | multinode-190122     | jenkins | v1.35.0 | 14 Apr 25 13:49 UTC |                     |
	| delete  | -p multinode-190122-m03                                                                 | multinode-190122-m03 | jenkins | v1.35.0 | 14 Apr 25 13:49 UTC | 14 Apr 25 13:49 UTC |
	| delete  | -p multinode-190122                                                                     | multinode-190122     | jenkins | v1.35.0 | 14 Apr 25 13:49 UTC | 14 Apr 25 13:49 UTC |
	| start   | -p test-preload-944524                                                                  | test-preload-944524  | jenkins | v1.35.0 | 14 Apr 25 13:49 UTC | 14 Apr 25 13:51 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-944524 image pull                                                          | test-preload-944524  | jenkins | v1.35.0 | 14 Apr 25 13:51 UTC | 14 Apr 25 13:51 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-944524                                                                  | test-preload-944524  | jenkins | v1.35.0 | 14 Apr 25 13:51 UTC | 14 Apr 25 13:51 UTC |
	| start   | -p test-preload-944524                                                                  | test-preload-944524  | jenkins | v1.35.0 | 14 Apr 25 13:51 UTC | 14 Apr 25 13:52 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-944524 image list                                                          | test-preload-944524  | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 13:51:25
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 13:51:25.588384 2222437 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:51:25.588640 2222437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:51:25.588648 2222437 out.go:358] Setting ErrFile to fd 2...
	I0414 13:51:25.588653 2222437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:51:25.588854 2222437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 13:51:25.589364 2222437 out.go:352] Setting JSON to false
	I0414 13:51:25.590275 2222437 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":167625,"bootTime":1744471061,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 13:51:25.590385 2222437 start.go:139] virtualization: kvm guest
	I0414 13:51:25.592310 2222437 out.go:177] * [test-preload-944524] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 13:51:25.593502 2222437 notify.go:220] Checking for updates...
	I0414 13:51:25.593521 2222437 out.go:177]   - MINIKUBE_LOCATION=20623
	I0414 13:51:25.594694 2222437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 13:51:25.595899 2222437 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 13:51:25.597018 2222437 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 13:51:25.598176 2222437 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 13:51:25.599302 2222437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 13:51:25.600636 2222437 config.go:182] Loaded profile config "test-preload-944524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0414 13:51:25.601116 2222437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:25.601196 2222437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:25.616237 2222437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I0414 13:51:25.616692 2222437 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:25.617261 2222437 main.go:141] libmachine: Using API Version  1
	I0414 13:51:25.617286 2222437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:25.617756 2222437 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:25.617968 2222437 main.go:141] libmachine: (test-preload-944524) Calling .DriverName
	I0414 13:51:25.619518 2222437 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0414 13:51:25.620744 2222437 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 13:51:25.621069 2222437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:25.621123 2222437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:25.635872 2222437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44077
	I0414 13:51:25.636306 2222437 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:25.636694 2222437 main.go:141] libmachine: Using API Version  1
	I0414 13:51:25.636715 2222437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:25.637103 2222437 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:25.637293 2222437 main.go:141] libmachine: (test-preload-944524) Calling .DriverName
	I0414 13:51:25.671950 2222437 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 13:51:25.673179 2222437 start.go:297] selected driver: kvm2
	I0414 13:51:25.673193 2222437 start.go:901] validating driver "kvm2" against &{Name:test-preload-944524 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-944524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:51:25.673309 2222437 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 13:51:25.674027 2222437 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:51:25.674120 2222437 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20623-2183077/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 13:51:25.688795 2222437 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 13:51:25.689209 2222437 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 13:51:25.689253 2222437 cni.go:84] Creating CNI manager for ""
	I0414 13:51:25.689300 2222437 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 13:51:25.689346 2222437 start.go:340] cluster config:
	{Name:test-preload-944524 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-944524 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:51:25.689436 2222437 iso.go:125] acquiring lock: {Name:mk1b6bc811d798b73231639961523f4c8d001a9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:51:25.691924 2222437 out.go:177] * Starting "test-preload-944524" primary control-plane node in "test-preload-944524" cluster
	I0414 13:51:25.692877 2222437 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0414 13:51:26.412202 2222437 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0414 13:51:26.412239 2222437 cache.go:56] Caching tarball of preloaded images
	I0414 13:51:26.412455 2222437 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0414 13:51:26.414088 2222437 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0414 13:51:26.415179 2222437 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0414 13:51:26.575440 2222437 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0414 13:51:44.198969 2222437 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0414 13:51:44.199100 2222437 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0414 13:51:45.072390 2222437 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0414 13:51:45.072560 2222437 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/test-preload-944524/config.json ...
	I0414 13:51:45.072870 2222437 start.go:360] acquireMachinesLock for test-preload-944524: {Name:mka8bf7d0904b7ab9a32ecac2c5513c5d5418afd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 13:51:45.072966 2222437 start.go:364] duration metric: took 65.984µs to acquireMachinesLock for "test-preload-944524"
	I0414 13:51:45.072988 2222437 start.go:96] Skipping create...Using existing machine configuration
	I0414 13:51:45.072999 2222437 fix.go:54] fixHost starting: 
	I0414 13:51:45.073329 2222437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:45.073377 2222437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:45.088446 2222437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41343
	I0414 13:51:45.088946 2222437 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:45.089531 2222437 main.go:141] libmachine: Using API Version  1
	I0414 13:51:45.089561 2222437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:45.090012 2222437 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:45.090231 2222437 main.go:141] libmachine: (test-preload-944524) Calling .DriverName
	I0414 13:51:45.090396 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetState
	I0414 13:51:45.092382 2222437 fix.go:112] recreateIfNeeded on test-preload-944524: state=Stopped err=<nil>
	I0414 13:51:45.092425 2222437 main.go:141] libmachine: (test-preload-944524) Calling .DriverName
	W0414 13:51:45.092589 2222437 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 13:51:45.094438 2222437 out.go:177] * Restarting existing kvm2 VM for "test-preload-944524" ...
	I0414 13:51:45.095610 2222437 main.go:141] libmachine: (test-preload-944524) Calling .Start
	I0414 13:51:45.095810 2222437 main.go:141] libmachine: (test-preload-944524) starting domain...
	I0414 13:51:45.095838 2222437 main.go:141] libmachine: (test-preload-944524) ensuring networks are active...
	I0414 13:51:45.096536 2222437 main.go:141] libmachine: (test-preload-944524) Ensuring network default is active
	I0414 13:51:45.096913 2222437 main.go:141] libmachine: (test-preload-944524) Ensuring network mk-test-preload-944524 is active
	I0414 13:51:45.097308 2222437 main.go:141] libmachine: (test-preload-944524) getting domain XML...
	I0414 13:51:45.097994 2222437 main.go:141] libmachine: (test-preload-944524) creating domain...
	I0414 13:51:46.493543 2222437 main.go:141] libmachine: (test-preload-944524) waiting for IP...
	I0414 13:51:46.494467 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:51:46.494818 2222437 main.go:141] libmachine: (test-preload-944524) DBG | unable to find current IP address of domain test-preload-944524 in network mk-test-preload-944524
	I0414 13:51:46.494938 2222437 main.go:141] libmachine: (test-preload-944524) DBG | I0414 13:51:46.494833 2222537 retry.go:31] will retry after 200.526952ms: waiting for domain to come up
	I0414 13:51:46.697393 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:51:46.697890 2222437 main.go:141] libmachine: (test-preload-944524) DBG | unable to find current IP address of domain test-preload-944524 in network mk-test-preload-944524
	I0414 13:51:46.697924 2222437 main.go:141] libmachine: (test-preload-944524) DBG | I0414 13:51:46.697854 2222537 retry.go:31] will retry after 317.974687ms: waiting for domain to come up
	I0414 13:51:47.017465 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:51:47.017879 2222437 main.go:141] libmachine: (test-preload-944524) DBG | unable to find current IP address of domain test-preload-944524 in network mk-test-preload-944524
	I0414 13:51:47.017914 2222437 main.go:141] libmachine: (test-preload-944524) DBG | I0414 13:51:47.017794 2222537 retry.go:31] will retry after 354.296815ms: waiting for domain to come up
	I0414 13:51:47.373420 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:51:47.373846 2222437 main.go:141] libmachine: (test-preload-944524) DBG | unable to find current IP address of domain test-preload-944524 in network mk-test-preload-944524
	I0414 13:51:47.373876 2222437 main.go:141] libmachine: (test-preload-944524) DBG | I0414 13:51:47.373811 2222537 retry.go:31] will retry after 557.822786ms: waiting for domain to come up
	I0414 13:51:47.933871 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:51:47.934361 2222437 main.go:141] libmachine: (test-preload-944524) DBG | unable to find current IP address of domain test-preload-944524 in network mk-test-preload-944524
	I0414 13:51:47.934390 2222437 main.go:141] libmachine: (test-preload-944524) DBG | I0414 13:51:47.934297 2222537 retry.go:31] will retry after 598.112456ms: waiting for domain to come up
	I0414 13:51:48.534312 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:51:48.534734 2222437 main.go:141] libmachine: (test-preload-944524) DBG | unable to find current IP address of domain test-preload-944524 in network mk-test-preload-944524
	I0414 13:51:48.534760 2222437 main.go:141] libmachine: (test-preload-944524) DBG | I0414 13:51:48.534696 2222537 retry.go:31] will retry after 622.909373ms: waiting for domain to come up
	I0414 13:51:49.159746 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:51:49.160120 2222437 main.go:141] libmachine: (test-preload-944524) DBG | unable to find current IP address of domain test-preload-944524 in network mk-test-preload-944524
	I0414 13:51:49.160191 2222437 main.go:141] libmachine: (test-preload-944524) DBG | I0414 13:51:49.160094 2222537 retry.go:31] will retry after 1.07684497s: waiting for domain to come up
	I0414 13:51:50.238932 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:51:50.239453 2222437 main.go:141] libmachine: (test-preload-944524) DBG | unable to find current IP address of domain test-preload-944524 in network mk-test-preload-944524
	I0414 13:51:50.239485 2222437 main.go:141] libmachine: (test-preload-944524) DBG | I0414 13:51:50.239421 2222537 retry.go:31] will retry after 1.213582003s: waiting for domain to come up
	I0414 13:51:51.454949 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:51:51.455423 2222437 main.go:141] libmachine: (test-preload-944524) DBG | unable to find current IP address of domain test-preload-944524 in network mk-test-preload-944524
	I0414 13:51:51.455446 2222437 main.go:141] libmachine: (test-preload-944524) DBG | I0414 13:51:51.455386 2222537 retry.go:31] will retry after 1.741015617s: waiting for domain to come up
	I0414 13:51:53.198274 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:51:53.198770 2222437 main.go:141] libmachine: (test-preload-944524) DBG | unable to find current IP address of domain test-preload-944524 in network mk-test-preload-944524
	I0414 13:51:53.198797 2222437 main.go:141] libmachine: (test-preload-944524) DBG | I0414 13:51:53.198729 2222537 retry.go:31] will retry after 2.27802162s: waiting for domain to come up
	I0414 13:51:55.478595 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:51:55.479069 2222437 main.go:141] libmachine: (test-preload-944524) DBG | unable to find current IP address of domain test-preload-944524 in network mk-test-preload-944524
	I0414 13:51:55.479136 2222437 main.go:141] libmachine: (test-preload-944524) DBG | I0414 13:51:55.479063 2222537 retry.go:31] will retry after 2.365269259s: waiting for domain to come up
	I0414 13:51:57.847673 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:51:57.848096 2222437 main.go:141] libmachine: (test-preload-944524) DBG | unable to find current IP address of domain test-preload-944524 in network mk-test-preload-944524
	I0414 13:51:57.848126 2222437 main.go:141] libmachine: (test-preload-944524) DBG | I0414 13:51:57.848059 2222537 retry.go:31] will retry after 3.311922651s: waiting for domain to come up
	I0414 13:52:01.162018 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:01.162466 2222437 main.go:141] libmachine: (test-preload-944524) DBG | unable to find current IP address of domain test-preload-944524 in network mk-test-preload-944524
	I0414 13:52:01.162497 2222437 main.go:141] libmachine: (test-preload-944524) DBG | I0414 13:52:01.162425 2222537 retry.go:31] will retry after 2.752272688s: waiting for domain to come up
	I0414 13:52:03.918339 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:03.918920 2222437 main.go:141] libmachine: (test-preload-944524) found domain IP: 192.168.39.64
	I0414 13:52:03.918948 2222437 main.go:141] libmachine: (test-preload-944524) reserving static IP address...
	I0414 13:52:03.918968 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has current primary IP address 192.168.39.64 and MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:03.919455 2222437 main.go:141] libmachine: (test-preload-944524) DBG | found host DHCP lease matching {name: "test-preload-944524", mac: "52:54:00:98:cd:71", ip: "192.168.39.64"} in network mk-test-preload-944524: {Iface:virbr1 ExpiryTime:2025-04-14 14:51:56 +0000 UTC Type:0 Mac:52:54:00:98:cd:71 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-944524 Clientid:01:52:54:00:98:cd:71}
	I0414 13:52:03.919498 2222437 main.go:141] libmachine: (test-preload-944524) DBG | skip adding static IP to network mk-test-preload-944524 - found existing host DHCP lease matching {name: "test-preload-944524", mac: "52:54:00:98:cd:71", ip: "192.168.39.64"}
	I0414 13:52:03.919516 2222437 main.go:141] libmachine: (test-preload-944524) reserved static IP address 192.168.39.64 for domain test-preload-944524
	I0414 13:52:03.919538 2222437 main.go:141] libmachine: (test-preload-944524) waiting for SSH...
	I0414 13:52:03.919554 2222437 main.go:141] libmachine: (test-preload-944524) DBG | Getting to WaitForSSH function...
	I0414 13:52:03.921712 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:03.922100 2222437 main.go:141] libmachine: (test-preload-944524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:cd:71", ip: ""} in network mk-test-preload-944524: {Iface:virbr1 ExpiryTime:2025-04-14 14:51:56 +0000 UTC Type:0 Mac:52:54:00:98:cd:71 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-944524 Clientid:01:52:54:00:98:cd:71}
	I0414 13:52:03.922157 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined IP address 192.168.39.64 and MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:03.922237 2222437 main.go:141] libmachine: (test-preload-944524) DBG | Using SSH client type: external
	I0414 13:52:03.922284 2222437 main.go:141] libmachine: (test-preload-944524) DBG | Using SSH private key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/test-preload-944524/id_rsa (-rw-------)
	I0414 13:52:03.922325 2222437 main.go:141] libmachine: (test-preload-944524) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.64 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/test-preload-944524/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 13:52:03.922344 2222437 main.go:141] libmachine: (test-preload-944524) DBG | About to run SSH command:
	I0414 13:52:03.922360 2222437 main.go:141] libmachine: (test-preload-944524) DBG | exit 0
	I0414 13:52:04.048785 2222437 main.go:141] libmachine: (test-preload-944524) DBG | SSH cmd err, output: <nil>: 
	I0414 13:52:04.049284 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetConfigRaw
	I0414 13:52:04.049947 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetIP
	I0414 13:52:04.052797 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:04.053169 2222437 main.go:141] libmachine: (test-preload-944524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:cd:71", ip: ""} in network mk-test-preload-944524: {Iface:virbr1 ExpiryTime:2025-04-14 14:51:56 +0000 UTC Type:0 Mac:52:54:00:98:cd:71 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-944524 Clientid:01:52:54:00:98:cd:71}
	I0414 13:52:04.053195 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined IP address 192.168.39.64 and MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:04.053459 2222437 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/test-preload-944524/config.json ...
	I0414 13:52:04.053648 2222437 machine.go:93] provisionDockerMachine start ...
	I0414 13:52:04.053667 2222437 main.go:141] libmachine: (test-preload-944524) Calling .DriverName
	I0414 13:52:04.053903 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHHostname
	I0414 13:52:04.056118 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:04.056430 2222437 main.go:141] libmachine: (test-preload-944524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:cd:71", ip: ""} in network mk-test-preload-944524: {Iface:virbr1 ExpiryTime:2025-04-14 14:51:56 +0000 UTC Type:0 Mac:52:54:00:98:cd:71 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-944524 Clientid:01:52:54:00:98:cd:71}
	I0414 13:52:04.056471 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined IP address 192.168.39.64 and MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:04.056542 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHPort
	I0414 13:52:04.056698 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHKeyPath
	I0414 13:52:04.056868 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHKeyPath
	I0414 13:52:04.057040 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHUsername
	I0414 13:52:04.057231 2222437 main.go:141] libmachine: Using SSH client type: native
	I0414 13:52:04.057524 2222437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0414 13:52:04.057535 2222437 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 13:52:04.165063 2222437 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 13:52:04.165096 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetMachineName
	I0414 13:52:04.165448 2222437 buildroot.go:166] provisioning hostname "test-preload-944524"
	I0414 13:52:04.165484 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetMachineName
	I0414 13:52:04.165699 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHHostname
	I0414 13:52:04.168605 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:04.169067 2222437 main.go:141] libmachine: (test-preload-944524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:cd:71", ip: ""} in network mk-test-preload-944524: {Iface:virbr1 ExpiryTime:2025-04-14 14:51:56 +0000 UTC Type:0 Mac:52:54:00:98:cd:71 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-944524 Clientid:01:52:54:00:98:cd:71}
	I0414 13:52:04.169108 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined IP address 192.168.39.64 and MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:04.169273 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHPort
	I0414 13:52:04.169464 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHKeyPath
	I0414 13:52:04.169651 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHKeyPath
	I0414 13:52:04.169783 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHUsername
	I0414 13:52:04.169955 2222437 main.go:141] libmachine: Using SSH client type: native
	I0414 13:52:04.170164 2222437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0414 13:52:04.170175 2222437 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-944524 && echo "test-preload-944524" | sudo tee /etc/hostname
	I0414 13:52:04.291786 2222437 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-944524
	
	I0414 13:52:04.291844 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHHostname
	I0414 13:52:04.294984 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:04.295440 2222437 main.go:141] libmachine: (test-preload-944524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:cd:71", ip: ""} in network mk-test-preload-944524: {Iface:virbr1 ExpiryTime:2025-04-14 14:51:56 +0000 UTC Type:0 Mac:52:54:00:98:cd:71 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-944524 Clientid:01:52:54:00:98:cd:71}
	I0414 13:52:04.295467 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined IP address 192.168.39.64 and MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:04.295697 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHPort
	I0414 13:52:04.295891 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHKeyPath
	I0414 13:52:04.296044 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHKeyPath
	I0414 13:52:04.296169 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHUsername
	I0414 13:52:04.296385 2222437 main.go:141] libmachine: Using SSH client type: native
	I0414 13:52:04.296839 2222437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0414 13:52:04.296870 2222437 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-944524' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-944524/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-944524' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 13:52:04.410195 2222437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:52:04.410232 2222437 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20623-2183077/.minikube CaCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20623-2183077/.minikube}
	I0414 13:52:04.410257 2222437 buildroot.go:174] setting up certificates
	I0414 13:52:04.410269 2222437 provision.go:84] configureAuth start
	I0414 13:52:04.410278 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetMachineName
	I0414 13:52:04.410601 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetIP
	I0414 13:52:04.413281 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:04.413665 2222437 main.go:141] libmachine: (test-preload-944524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:cd:71", ip: ""} in network mk-test-preload-944524: {Iface:virbr1 ExpiryTime:2025-04-14 14:51:56 +0000 UTC Type:0 Mac:52:54:00:98:cd:71 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-944524 Clientid:01:52:54:00:98:cd:71}
	I0414 13:52:04.413688 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined IP address 192.168.39.64 and MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:04.413825 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHHostname
	I0414 13:52:04.416322 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:04.416646 2222437 main.go:141] libmachine: (test-preload-944524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:cd:71", ip: ""} in network mk-test-preload-944524: {Iface:virbr1 ExpiryTime:2025-04-14 14:51:56 +0000 UTC Type:0 Mac:52:54:00:98:cd:71 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-944524 Clientid:01:52:54:00:98:cd:71}
	I0414 13:52:04.416686 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined IP address 192.168.39.64 and MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:04.416798 2222437 provision.go:143] copyHostCerts
	I0414 13:52:04.416861 2222437 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem, removing ...
	I0414 13:52:04.416886 2222437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem
	I0414 13:52:04.416956 2222437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem (1078 bytes)
	I0414 13:52:04.417051 2222437 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem, removing ...
	I0414 13:52:04.417059 2222437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem
	I0414 13:52:04.417082 2222437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem (1123 bytes)
	I0414 13:52:04.417147 2222437 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem, removing ...
	I0414 13:52:04.417154 2222437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem
	I0414 13:52:04.417174 2222437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem (1675 bytes)
	I0414 13:52:04.417223 2222437 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem org=jenkins.test-preload-944524 san=[127.0.0.1 192.168.39.64 localhost minikube test-preload-944524]
	I0414 13:52:04.628824 2222437 provision.go:177] copyRemoteCerts
	I0414 13:52:04.628895 2222437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 13:52:04.628924 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHHostname
	I0414 13:52:04.631688 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:04.632069 2222437 main.go:141] libmachine: (test-preload-944524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:cd:71", ip: ""} in network mk-test-preload-944524: {Iface:virbr1 ExpiryTime:2025-04-14 14:51:56 +0000 UTC Type:0 Mac:52:54:00:98:cd:71 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-944524 Clientid:01:52:54:00:98:cd:71}
	I0414 13:52:04.632104 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined IP address 192.168.39.64 and MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:04.632265 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHPort
	I0414 13:52:04.632581 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHKeyPath
	I0414 13:52:04.632802 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHUsername
	I0414 13:52:04.632984 2222437 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/test-preload-944524/id_rsa Username:docker}
	I0414 13:52:04.715565 2222437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 13:52:04.741870 2222437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0414 13:52:04.766995 2222437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 13:52:04.794538 2222437 provision.go:87] duration metric: took 384.254975ms to configureAuth
	I0414 13:52:04.794573 2222437 buildroot.go:189] setting minikube options for container-runtime
	I0414 13:52:04.794805 2222437 config.go:182] Loaded profile config "test-preload-944524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0414 13:52:04.794931 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHHostname
	I0414 13:52:04.797796 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:04.798149 2222437 main.go:141] libmachine: (test-preload-944524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:cd:71", ip: ""} in network mk-test-preload-944524: {Iface:virbr1 ExpiryTime:2025-04-14 14:51:56 +0000 UTC Type:0 Mac:52:54:00:98:cd:71 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-944524 Clientid:01:52:54:00:98:cd:71}
	I0414 13:52:04.798182 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined IP address 192.168.39.64 and MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:04.798320 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHPort
	I0414 13:52:04.798553 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHKeyPath
	I0414 13:52:04.798749 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHKeyPath
	I0414 13:52:04.798901 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHUsername
	I0414 13:52:04.799082 2222437 main.go:141] libmachine: Using SSH client type: native
	I0414 13:52:04.799274 2222437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0414 13:52:04.799289 2222437 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 13:52:05.039504 2222437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 13:52:05.039538 2222437 machine.go:96] duration metric: took 985.876768ms to provisionDockerMachine
	I0414 13:52:05.039553 2222437 start.go:293] postStartSetup for "test-preload-944524" (driver="kvm2")
	I0414 13:52:05.039567 2222437 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 13:52:05.039603 2222437 main.go:141] libmachine: (test-preload-944524) Calling .DriverName
	I0414 13:52:05.039951 2222437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 13:52:05.039985 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHHostname
	I0414 13:52:05.042926 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:05.043263 2222437 main.go:141] libmachine: (test-preload-944524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:cd:71", ip: ""} in network mk-test-preload-944524: {Iface:virbr1 ExpiryTime:2025-04-14 14:51:56 +0000 UTC Type:0 Mac:52:54:00:98:cd:71 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-944524 Clientid:01:52:54:00:98:cd:71}
	I0414 13:52:05.043287 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined IP address 192.168.39.64 and MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:05.043495 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHPort
	I0414 13:52:05.043687 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHKeyPath
	I0414 13:52:05.043812 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHUsername
	I0414 13:52:05.043973 2222437 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/test-preload-944524/id_rsa Username:docker}
	I0414 13:52:05.127936 2222437 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 13:52:05.132315 2222437 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 13:52:05.132344 2222437 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/addons for local assets ...
	I0414 13:52:05.132414 2222437 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/files for local assets ...
	I0414 13:52:05.132485 2222437 filesync.go:149] local asset: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem -> 21904002.pem in /etc/ssl/certs
	I0414 13:52:05.132578 2222437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 13:52:05.141666 2222437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 13:52:05.167903 2222437 start.go:296] duration metric: took 128.331417ms for postStartSetup
	I0414 13:52:05.167946 2222437 fix.go:56] duration metric: took 20.09494828s for fixHost
	I0414 13:52:05.167973 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHHostname
	I0414 13:52:05.170928 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:05.171372 2222437 main.go:141] libmachine: (test-preload-944524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:cd:71", ip: ""} in network mk-test-preload-944524: {Iface:virbr1 ExpiryTime:2025-04-14 14:51:56 +0000 UTC Type:0 Mac:52:54:00:98:cd:71 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-944524 Clientid:01:52:54:00:98:cd:71}
	I0414 13:52:05.171404 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined IP address 192.168.39.64 and MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:05.171608 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHPort
	I0414 13:52:05.171827 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHKeyPath
	I0414 13:52:05.172008 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHKeyPath
	I0414 13:52:05.172190 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHUsername
	I0414 13:52:05.172372 2222437 main.go:141] libmachine: Using SSH client type: native
	I0414 13:52:05.172640 2222437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0414 13:52:05.172654 2222437 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 13:52:05.277740 2222437 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744638725.252607596
	
	I0414 13:52:05.277770 2222437 fix.go:216] guest clock: 1744638725.252607596
	I0414 13:52:05.277781 2222437 fix.go:229] Guest: 2025-04-14 13:52:05.252607596 +0000 UTC Remote: 2025-04-14 13:52:05.167950475 +0000 UTC m=+39.615668297 (delta=84.657121ms)
	I0414 13:52:05.277851 2222437 fix.go:200] guest clock delta is within tolerance: 84.657121ms
	I0414 13:52:05.277864 2222437 start.go:83] releasing machines lock for "test-preload-944524", held for 20.204886144s
	I0414 13:52:05.277892 2222437 main.go:141] libmachine: (test-preload-944524) Calling .DriverName
	I0414 13:52:05.278212 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetIP
	I0414 13:52:05.281127 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:05.281484 2222437 main.go:141] libmachine: (test-preload-944524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:cd:71", ip: ""} in network mk-test-preload-944524: {Iface:virbr1 ExpiryTime:2025-04-14 14:51:56 +0000 UTC Type:0 Mac:52:54:00:98:cd:71 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-944524 Clientid:01:52:54:00:98:cd:71}
	I0414 13:52:05.281516 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined IP address 192.168.39.64 and MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:05.281650 2222437 main.go:141] libmachine: (test-preload-944524) Calling .DriverName
	I0414 13:52:05.282157 2222437 main.go:141] libmachine: (test-preload-944524) Calling .DriverName
	I0414 13:52:05.282335 2222437 main.go:141] libmachine: (test-preload-944524) Calling .DriverName
	I0414 13:52:05.282441 2222437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 13:52:05.282494 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHHostname
	I0414 13:52:05.282559 2222437 ssh_runner.go:195] Run: cat /version.json
	I0414 13:52:05.282584 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHHostname
	I0414 13:52:05.285436 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:05.285606 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:05.285805 2222437 main.go:141] libmachine: (test-preload-944524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:cd:71", ip: ""} in network mk-test-preload-944524: {Iface:virbr1 ExpiryTime:2025-04-14 14:51:56 +0000 UTC Type:0 Mac:52:54:00:98:cd:71 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-944524 Clientid:01:52:54:00:98:cd:71}
	I0414 13:52:05.285837 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined IP address 192.168.39.64 and MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:05.285927 2222437 main.go:141] libmachine: (test-preload-944524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:cd:71", ip: ""} in network mk-test-preload-944524: {Iface:virbr1 ExpiryTime:2025-04-14 14:51:56 +0000 UTC Type:0 Mac:52:54:00:98:cd:71 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-944524 Clientid:01:52:54:00:98:cd:71}
	I0414 13:52:05.285948 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined IP address 192.168.39.64 and MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:05.286145 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHPort
	I0414 13:52:05.286200 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHPort
	I0414 13:52:05.286317 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHKeyPath
	I0414 13:52:05.286393 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHKeyPath
	I0414 13:52:05.286505 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHUsername
	I0414 13:52:05.286519 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHUsername
	I0414 13:52:05.286694 2222437 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/test-preload-944524/id_rsa Username:docker}
	I0414 13:52:05.286700 2222437 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/test-preload-944524/id_rsa Username:docker}
	I0414 13:52:05.391129 2222437 ssh_runner.go:195] Run: systemctl --version
	I0414 13:52:05.397259 2222437 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 13:52:05.542027 2222437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 13:52:05.548569 2222437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 13:52:05.548633 2222437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 13:52:05.564530 2222437 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 13:52:05.564559 2222437 start.go:495] detecting cgroup driver to use...
	I0414 13:52:05.564637 2222437 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 13:52:05.581444 2222437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 13:52:05.594477 2222437 docker.go:217] disabling cri-docker service (if available) ...
	I0414 13:52:05.594547 2222437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 13:52:05.609774 2222437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 13:52:05.624838 2222437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 13:52:05.737697 2222437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 13:52:05.896535 2222437 docker.go:233] disabling docker service ...
	I0414 13:52:05.896624 2222437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 13:52:05.911405 2222437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 13:52:05.924610 2222437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 13:52:06.045026 2222437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 13:52:06.157383 2222437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 13:52:06.171442 2222437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 13:52:06.190547 2222437 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0414 13:52:06.190634 2222437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:52:06.200999 2222437 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 13:52:06.201063 2222437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:52:06.211486 2222437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:52:06.221515 2222437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:52:06.231500 2222437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 13:52:06.241797 2222437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:52:06.251911 2222437 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:52:06.269240 2222437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:52:06.280293 2222437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 13:52:06.289692 2222437 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 13:52:06.289742 2222437 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 13:52:06.302955 2222437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 13:52:06.312778 2222437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:52:06.420941 2222437 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 13:52:06.511170 2222437 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 13:52:06.511259 2222437 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 13:52:06.516266 2222437 start.go:563] Will wait 60s for crictl version
	I0414 13:52:06.516333 2222437 ssh_runner.go:195] Run: which crictl
	I0414 13:52:06.520024 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 13:52:06.557827 2222437 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 13:52:06.557913 2222437 ssh_runner.go:195] Run: crio --version
	I0414 13:52:06.587954 2222437 ssh_runner.go:195] Run: crio --version
	I0414 13:52:06.617382 2222437 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0414 13:52:06.618679 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetIP
	I0414 13:52:06.621545 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:06.621951 2222437 main.go:141] libmachine: (test-preload-944524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:cd:71", ip: ""} in network mk-test-preload-944524: {Iface:virbr1 ExpiryTime:2025-04-14 14:51:56 +0000 UTC Type:0 Mac:52:54:00:98:cd:71 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-944524 Clientid:01:52:54:00:98:cd:71}
	I0414 13:52:06.621982 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined IP address 192.168.39.64 and MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:06.622219 2222437 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 13:52:06.626390 2222437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:52:06.639476 2222437 kubeadm.go:883] updating cluster {Name:test-preload-944524 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-prelo
ad-944524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 13:52:06.639582 2222437 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0414 13:52:06.639621 2222437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:52:06.675644 2222437 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0414 13:52:06.675730 2222437 ssh_runner.go:195] Run: which lz4
	I0414 13:52:06.679913 2222437 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 13:52:06.684182 2222437 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 13:52:06.684210 2222437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0414 13:52:08.258425 2222437 crio.go:462] duration metric: took 1.578547415s to copy over tarball
	I0414 13:52:08.258522 2222437 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 13:52:10.651782 2222437 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.393212868s)
	I0414 13:52:10.651821 2222437 crio.go:469] duration metric: took 2.393343139s to extract the tarball
	I0414 13:52:10.651832 2222437 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 13:52:10.693219 2222437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:52:10.741762 2222437 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0414 13:52:10.741794 2222437 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 13:52:10.741868 2222437 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:52:10.741886 2222437 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0414 13:52:10.741922 2222437 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0414 13:52:10.741924 2222437 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0414 13:52:10.741951 2222437 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0414 13:52:10.741972 2222437 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0414 13:52:10.742034 2222437 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 13:52:10.742051 2222437 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0414 13:52:10.743505 2222437 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0414 13:52:10.743511 2222437 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0414 13:52:10.743560 2222437 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0414 13:52:10.743584 2222437 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0414 13:52:10.743583 2222437 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 13:52:10.743560 2222437 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:52:10.743505 2222437 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0414 13:52:10.743943 2222437 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0414 13:52:10.898945 2222437 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0414 13:52:10.945665 2222437 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0414 13:52:10.945715 2222437 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0414 13:52:10.945764 2222437 ssh_runner.go:195] Run: which crictl
	I0414 13:52:10.946744 2222437 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0414 13:52:10.950302 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0414 13:52:10.999605 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0414 13:52:10.999642 2222437 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0414 13:52:10.999681 2222437 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0414 13:52:10.999711 2222437 ssh_runner.go:195] Run: which crictl
	I0414 13:52:11.031621 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0414 13:52:11.031749 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0414 13:52:11.062078 2222437 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0414 13:52:11.063494 2222437 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0414 13:52:11.064642 2222437 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0414 13:52:11.086937 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0414 13:52:11.086983 2222437 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0414 13:52:11.087088 2222437 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0414 13:52:11.173015 2222437 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0414 13:52:11.173063 2222437 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0414 13:52:11.173130 2222437 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0414 13:52:11.173147 2222437 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0414 13:52:11.173165 2222437 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0414 13:52:11.173178 2222437 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0414 13:52:11.173179 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0414 13:52:11.173184 2222437 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0414 13:52:11.173207 2222437 ssh_runner.go:195] Run: which crictl
	I0414 13:52:11.173221 2222437 ssh_runner.go:195] Run: which crictl
	I0414 13:52:11.173139 2222437 ssh_runner.go:195] Run: which crictl
	I0414 13:52:11.173187 2222437 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0414 13:52:11.173278 2222437 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0414 13:52:11.180043 2222437 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 13:52:11.200371 2222437 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0414 13:52:13.619063 2222437 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:52:14.687613 2222437 ssh_runner.go:235] Completed: which crictl: (3.514377078s)
	I0414 13:52:14.687694 2222437 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6: (3.514487547s)
	I0414 13:52:14.687716 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0414 13:52:14.687750 2222437 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0414 13:52:14.687776 2222437 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (3.514474353s)
	I0414 13:52:14.687789 2222437 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0414 13:52:14.687813 2222437 ssh_runner.go:235] Completed: which crictl: (3.514575143s)
	I0414 13:52:14.687853 2222437 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0414 13:52:14.687881 2222437 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4: (3.507817081s)
	I0414 13:52:14.687884 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0414 13:52:14.687917 2222437 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0414 13:52:14.687837 2222437 ssh_runner.go:235] Completed: which crictl: (3.514598252s)
	I0414 13:52:14.687949 2222437 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.068856296s)
	I0414 13:52:14.687955 2222437 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 13:52:14.687971 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0414 13:52:14.687992 2222437 ssh_runner.go:195] Run: which crictl
	I0414 13:52:14.687919 2222437 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4: (3.48751973s)
	I0414 13:52:14.688084 2222437 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0414 13:52:14.688121 2222437 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0414 13:52:14.688161 2222437 ssh_runner.go:195] Run: which crictl
	I0414 13:52:14.752877 2222437 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0414 13:52:14.752906 2222437 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0414 13:52:14.752962 2222437 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0414 13:52:14.752982 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0414 13:52:14.765109 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0414 13:52:14.765174 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0414 13:52:14.765200 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 13:52:14.765311 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0414 13:52:15.299293 2222437 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0414 13:52:15.299448 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0414 13:52:15.299589 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 13:52:15.299611 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0414 13:52:15.299715 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0414 13:52:15.299744 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0414 13:52:15.370312 2222437 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0414 13:52:15.370409 2222437 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0414 13:52:15.389545 2222437 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0414 13:52:15.389668 2222437 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0414 13:52:15.400664 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 13:52:15.400705 2222437 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0414 13:52:15.400815 2222437 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0414 13:52:15.401984 2222437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0414 13:52:15.401997 2222437 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0414 13:52:15.402012 2222437 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0414 13:52:15.402041 2222437 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0414 13:52:15.413460 2222437 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0414 13:52:15.462355 2222437 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0414 13:52:15.462417 2222437 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0414 13:52:15.462496 2222437 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0414 13:52:15.462502 2222437 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0414 13:52:15.462635 2222437 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0414 13:52:17.491879 2222437 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.089812559s)
	I0414 13:52:17.491921 2222437 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0414 13:52:17.491960 2222437 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0414 13:52:17.491980 2222437 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.029457071s)
	I0414 13:52:17.492017 2222437 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0414 13:52:17.492031 2222437 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.029377325s)
	I0414 13:52:17.492040 2222437 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0414 13:52:17.492053 2222437 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0414 13:52:17.944843 2222437 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0414 13:52:17.944902 2222437 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0414 13:52:17.944954 2222437 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0414 13:52:18.092913 2222437 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0414 13:52:18.092973 2222437 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0414 13:52:18.093046 2222437 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0414 13:52:18.840295 2222437 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0414 13:52:18.840354 2222437 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0414 13:52:18.840415 2222437 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0414 13:52:19.590776 2222437 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0414 13:52:19.590830 2222437 cache_images.go:123] Successfully loaded all cached images
	I0414 13:52:19.590836 2222437 cache_images.go:92] duration metric: took 8.849029825s to LoadCachedImages
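
The LoadCachedImages section above alternates "Loading image:" with "sudo podman load -i" for each cached tarball under /var/lib/minikube/images. A condensed sketch of that loop is shown below; the image list is copied from the log, and running the commands through a local exec.Command (rather than minikube's ssh_runner, which also transfers missing tarballs first) is an illustrative assumption.

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		// Cached image tarballs staged under /var/lib/minikube/images, as in the log.
		images := []string{
			"/var/lib/minikube/images/kube-proxy_v1.24.4",
			"/var/lib/minikube/images/coredns_v1.8.6",
			"/var/lib/minikube/images/etcd_3.5.3-0",
			"/var/lib/minikube/images/kube-scheduler_v1.24.4",
			"/var/lib/minikube/images/pause_3.7",
			"/var/lib/minikube/images/kube-controller-manager_v1.24.4",
			"/var/lib/minikube/images/kube-apiserver_v1.24.4",
		}
		for _, img := range images {
			// Mirrors: sudo podman load -i <tarball>
			cmd := exec.Command("sudo", "podman", "load", "-i", img)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Println("load failed for", img, ":", err)
			}
		}
	}
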
	I0414 13:52:19.590849 2222437 kubeadm.go:934] updating node { 192.168.39.64 8443 v1.24.4 crio true true} ...
	I0414 13:52:19.591007 2222437 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-944524 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-944524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 13:52:19.591099 2222437 ssh_runner.go:195] Run: crio config
	I0414 13:52:19.640511 2222437 cni.go:84] Creating CNI manager for ""
	I0414 13:52:19.640539 2222437 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 13:52:19.640549 2222437 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 13:52:19.640567 2222437 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.64 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-944524 NodeName:test-preload-944524 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 13:52:19.640699 2222437 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-944524"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 13:52:19.640787 2222437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0414 13:52:19.651361 2222437 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 13:52:19.651439 2222437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 13:52:19.661409 2222437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0414 13:52:19.677777 2222437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 13:52:19.693630 2222437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0414 13:52:19.710019 2222437 ssh_runner.go:195] Run: grep 192.168.39.64	control-plane.minikube.internal$ /etc/hosts
	I0414 13:52:19.713913 2222437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.64	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:52:19.726433 2222437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:52:19.834078 2222437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 13:52:19.851291 2222437 certs.go:68] Setting up /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/test-preload-944524 for IP: 192.168.39.64
	I0414 13:52:19.851315 2222437 certs.go:194] generating shared ca certs ...
	I0414 13:52:19.851335 2222437 certs.go:226] acquiring lock for ca certs: {Name:mkd994da28098ae08a84efba20f096b52fe71222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:52:19.851543 2222437 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key
	I0414 13:52:19.851601 2222437 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key
	I0414 13:52:19.851615 2222437 certs.go:256] generating profile certs ...
	I0414 13:52:19.851723 2222437 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/test-preload-944524/client.key
	I0414 13:52:19.851807 2222437 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/test-preload-944524/apiserver.key.5a660bda
	I0414 13:52:19.851861 2222437 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/test-preload-944524/proxy-client.key
	I0414 13:52:19.852028 2222437 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem (1338 bytes)
	W0414 13:52:19.852070 2222437 certs.go:480] ignoring /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400_empty.pem, impossibly tiny 0 bytes
	I0414 13:52:19.852084 2222437 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 13:52:19.852121 2222437 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem (1078 bytes)
	I0414 13:52:19.852152 2222437 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem (1123 bytes)
	I0414 13:52:19.852182 2222437 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem (1675 bytes)
	I0414 13:52:19.852233 2222437 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 13:52:19.852925 2222437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 13:52:19.894243 2222437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 13:52:19.919815 2222437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 13:52:19.950731 2222437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 13:52:19.980031 2222437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/test-preload-944524/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0414 13:52:20.010064 2222437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/test-preload-944524/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 13:52:20.040038 2222437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/test-preload-944524/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 13:52:20.078147 2222437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/test-preload-944524/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 13:52:20.102112 2222437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem --> /usr/share/ca-certificates/2190400.pem (1338 bytes)
	I0414 13:52:20.125709 2222437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /usr/share/ca-certificates/21904002.pem (1708 bytes)
	I0414 13:52:20.149253 2222437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 13:52:20.172568 2222437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 13:52:20.189114 2222437 ssh_runner.go:195] Run: openssl version
	I0414 13:52:20.194997 2222437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 13:52:20.206125 2222437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:52:20.210572 2222437 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:52:20.210645 2222437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:52:20.216866 2222437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 13:52:20.228568 2222437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2190400.pem && ln -fs /usr/share/ca-certificates/2190400.pem /etc/ssl/certs/2190400.pem"
	I0414 13:52:20.239927 2222437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2190400.pem
	I0414 13:52:20.244470 2222437 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 13:02 /usr/share/ca-certificates/2190400.pem
	I0414 13:52:20.244536 2222437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2190400.pem
	I0414 13:52:20.250487 2222437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2190400.pem /etc/ssl/certs/51391683.0"
	I0414 13:52:20.261526 2222437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21904002.pem && ln -fs /usr/share/ca-certificates/21904002.pem /etc/ssl/certs/21904002.pem"
	I0414 13:52:20.272249 2222437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21904002.pem
	I0414 13:52:20.276601 2222437 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 13:02 /usr/share/ca-certificates/21904002.pem
	I0414 13:52:20.276653 2222437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21904002.pem
	I0414 13:52:20.282157 2222437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21904002.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 13:52:20.292750 2222437 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 13:52:20.297179 2222437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 13:52:20.302957 2222437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 13:52:20.308776 2222437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 13:52:20.314688 2222437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 13:52:20.320638 2222437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 13:52:20.326699 2222437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
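
The run of openssl invocations above is a bulk 24-hour expiry check over the control-plane certificates. A minimal sketch of the same check follows; the certificate paths are taken from the log, and openssl's documented -checkend behaviour (non-zero exit when the certificate expires within the given number of seconds) is relied on for the result.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
			"/var/lib/minikube/certs/etcd/peer.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			// Mirrors: openssl x509 -noout -in <cert> -checkend 86400
			err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run()
			if err != nil {
				fmt.Println(c, "expires within 24h (or could not be read):", err)
			} else {
				fmt.Println(c, "valid for at least 24h")
			}
		}
	}
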
	I0414 13:52:20.332599 2222437 kubeadm.go:392] StartCluster: {Name:test-preload-944524 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-944524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:52:20.332691 2222437 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 13:52:20.332792 2222437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 13:52:20.387854 2222437 cri.go:89] found id: ""
	I0414 13:52:20.387950 2222437 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 13:52:20.398854 2222437 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 13:52:20.398897 2222437 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 13:52:20.398953 2222437 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 13:52:20.409417 2222437 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 13:52:20.409876 2222437 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-944524" does not appear in /home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 13:52:20.410036 2222437 kubeconfig.go:62] /home/jenkins/minikube-integration/20623-2183077/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-944524" cluster setting kubeconfig missing "test-preload-944524" context setting]
	I0414 13:52:20.410347 2222437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/kubeconfig: {Name:mka4d12cff403cd78c270c5ea752d21aa135c1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:52:20.410876 2222437 kapi.go:59] client config for test-preload-944524: &rest.Config{Host:"https://192.168.39.64:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/test-preload-944524/client.crt", KeyFile:"/home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/test-preload-944524/client.key", CAFile:"/home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 13:52:20.411368 2222437 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 13:52:20.411384 2222437 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 13:52:20.411391 2222437 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 13:52:20.411397 2222437 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 13:52:20.411744 2222437 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 13:52:20.421742 2222437 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.64
	I0414 13:52:20.421785 2222437 kubeadm.go:1160] stopping kube-system containers ...
	I0414 13:52:20.421799 2222437 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 13:52:20.421890 2222437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 13:52:20.463889 2222437 cri.go:89] found id: ""
	I0414 13:52:20.463997 2222437 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 13:52:20.481481 2222437 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 13:52:20.491453 2222437 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 13:52:20.491478 2222437 kubeadm.go:157] found existing configuration files:
	
	I0414 13:52:20.491530 2222437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 13:52:20.501235 2222437 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 13:52:20.501289 2222437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 13:52:20.510782 2222437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 13:52:20.519955 2222437 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 13:52:20.520010 2222437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 13:52:20.529553 2222437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 13:52:20.538884 2222437 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 13:52:20.538948 2222437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 13:52:20.548405 2222437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 13:52:20.557465 2222437 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 13:52:20.557522 2222437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
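
The grep-then-rm pattern above checks each existing kubeconfig for the expected control-plane endpoint and removes the file when the endpoint is absent (here all four files are simply missing). A minimal sketch of that cleanup follows; the file list and endpoint are copied from the log, and reading the files directly instead of over ssh is an illustrative simplification.

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				continue // missing files are simply skipped, as in the log
			}
			if !strings.Contains(string(data), endpoint) {
				// Mirrors: sudo rm -f <file> when the expected endpoint is absent.
				fmt.Println("removing stale config:", f)
				_ = os.Remove(f)
			}
		}
	}
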
	I0414 13:52:20.567238 2222437 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 13:52:20.577432 2222437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 13:52:20.672378 2222437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 13:52:21.260193 2222437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 13:52:21.522928 2222437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 13:52:21.595074 2222437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0414 13:52:21.676499 2222437 api_server.go:52] waiting for apiserver process to appear ...
	I0414 13:52:21.676629 2222437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:52:22.177646 2222437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:52:22.677448 2222437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:52:22.702940 2222437 api_server.go:72] duration metric: took 1.026439061s to wait for apiserver process to appear ...
	I0414 13:52:22.702981 2222437 api_server.go:88] waiting for apiserver healthz status ...
	I0414 13:52:22.703009 2222437 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0414 13:52:22.703657 2222437 api_server.go:269] stopped: https://192.168.39.64:8443/healthz: Get "https://192.168.39.64:8443/healthz": dial tcp 192.168.39.64:8443: connect: connection refused
	I0414 13:52:23.203287 2222437 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0414 13:52:23.204013 2222437 api_server.go:269] stopped: https://192.168.39.64:8443/healthz: Get "https://192.168.39.64:8443/healthz": dial tcp 192.168.39.64:8443: connect: connection refused
	I0414 13:52:23.703764 2222437 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0414 13:52:26.521065 2222437 api_server.go:279] https://192.168.39.64:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 13:52:26.521098 2222437 api_server.go:103] status: https://192.168.39.64:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 13:52:26.521148 2222437 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0414 13:52:26.551599 2222437 api_server.go:279] https://192.168.39.64:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 13:52:26.551628 2222437 api_server.go:103] status: https://192.168.39.64:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 13:52:26.704017 2222437 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0414 13:52:26.717534 2222437 api_server.go:279] https://192.168.39.64:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 13:52:26.717575 2222437 api_server.go:103] status: https://192.168.39.64:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 13:52:27.203244 2222437 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0414 13:52:27.208588 2222437 api_server.go:279] https://192.168.39.64:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 13:52:27.208632 2222437 api_server.go:103] status: https://192.168.39.64:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 13:52:27.703267 2222437 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0414 13:52:27.710441 2222437 api_server.go:279] https://192.168.39.64:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 13:52:27.710476 2222437 api_server.go:103] status: https://192.168.39.64:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 13:52:28.203137 2222437 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0414 13:52:28.209087 2222437 api_server.go:279] https://192.168.39.64:8443/healthz returned 200:
	ok
	I0414 13:52:28.217383 2222437 api_server.go:141] control plane version: v1.24.4
	I0414 13:52:28.217410 2222437 api_server.go:131] duration metric: took 5.514421497s to wait for apiserver health ...
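
The healthz probing above polls https://192.168.39.64:8443/healthz until it returns 200, tolerating the intermediate 403 and 500 responses. A compact sketch of such a wait loop is shown below; the endpoint comes from the log, while the InsecureSkipVerify transport and the fixed retry budget are illustrative simplifications (the real client authenticates with the profile's client certificate and CA).

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		const url = "https://192.168.39.64:8443/healthz"
		for i := 0; i < 60; i++ {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				// 403 (RBAC not bootstrapped yet) and 500 (post-start hooks pending) are retried.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}
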
	I0414 13:52:28.217420 2222437 cni.go:84] Creating CNI manager for ""
	I0414 13:52:28.217426 2222437 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 13:52:28.219224 2222437 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 13:52:28.220330 2222437 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 13:52:28.231439 2222437 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 13:52:28.250505 2222437 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 13:52:28.254975 2222437 system_pods.go:59] 7 kube-system pods found
	I0414 13:52:28.255010 2222437 system_pods.go:61] "coredns-6d4b75cb6d-c2p94" [5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 13:52:28.255021 2222437 system_pods.go:61] "etcd-test-preload-944524" [0c3c5b2f-6685-4b87-a22f-f5088939e140] Running
	I0414 13:52:28.255028 2222437 system_pods.go:61] "kube-apiserver-test-preload-944524" [359ee1f5-692b-4509-854b-352b5bbc76b9] Running
	I0414 13:52:28.255033 2222437 system_pods.go:61] "kube-controller-manager-test-preload-944524" [5a3a1589-333e-4bf7-85af-c68207ecec60] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0414 13:52:28.255038 2222437 system_pods.go:61] "kube-proxy-kgqdm" [13fd9f92-d334-4ecd-a25c-71a452dea8d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0414 13:52:28.255042 2222437 system_pods.go:61] "kube-scheduler-test-preload-944524" [edd6f02d-79fd-45bb-a027-847d468416b4] Running
	I0414 13:52:28.255049 2222437 system_pods.go:61] "storage-provisioner" [1fdc0e8c-7d00-47f8-b85e-fb37a6c90300] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0414 13:52:28.255055 2222437 system_pods.go:74] duration metric: took 4.525485ms to wait for pod list to return data ...
	I0414 13:52:28.255062 2222437 node_conditions.go:102] verifying NodePressure condition ...
	I0414 13:52:28.257229 2222437 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 13:52:28.257254 2222437 node_conditions.go:123] node cpu capacity is 2
	I0414 13:52:28.257266 2222437 node_conditions.go:105] duration metric: took 2.199581ms to run NodePressure ...
	I0414 13:52:28.257316 2222437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 13:52:28.472602 2222437 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0414 13:52:28.475679 2222437 kubeadm.go:739] kubelet initialised
	I0414 13:52:28.475699 2222437 kubeadm.go:740] duration metric: took 3.072022ms waiting for restarted kubelet to initialise ...
	I0414 13:52:28.475708 2222437 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 13:52:28.482496 2222437 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-c2p94" in "kube-system" namespace to be "Ready" ...
	I0414 13:52:28.486350 2222437 pod_ready.go:98] node "test-preload-944524" hosting pod "coredns-6d4b75cb6d-c2p94" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-944524" has status "Ready":"False"
	I0414 13:52:28.486370 2222437 pod_ready.go:82] duration metric: took 3.850684ms for pod "coredns-6d4b75cb6d-c2p94" in "kube-system" namespace to be "Ready" ...
	E0414 13:52:28.486379 2222437 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-944524" hosting pod "coredns-6d4b75cb6d-c2p94" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-944524" has status "Ready":"False"
	I0414 13:52:28.486385 2222437 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-944524" in "kube-system" namespace to be "Ready" ...
	I0414 13:52:28.490225 2222437 pod_ready.go:98] node "test-preload-944524" hosting pod "etcd-test-preload-944524" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-944524" has status "Ready":"False"
	I0414 13:52:28.490254 2222437 pod_ready.go:82] duration metric: took 3.858163ms for pod "etcd-test-preload-944524" in "kube-system" namespace to be "Ready" ...
	E0414 13:52:28.490268 2222437 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-944524" hosting pod "etcd-test-preload-944524" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-944524" has status "Ready":"False"
	I0414 13:52:28.490285 2222437 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-944524" in "kube-system" namespace to be "Ready" ...
	I0414 13:52:28.494485 2222437 pod_ready.go:98] node "test-preload-944524" hosting pod "kube-apiserver-test-preload-944524" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-944524" has status "Ready":"False"
	I0414 13:52:28.494511 2222437 pod_ready.go:82] duration metric: took 4.212323ms for pod "kube-apiserver-test-preload-944524" in "kube-system" namespace to be "Ready" ...
	E0414 13:52:28.494522 2222437 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-944524" hosting pod "kube-apiserver-test-preload-944524" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-944524" has status "Ready":"False"
	I0414 13:52:28.494531 2222437 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-944524" in "kube-system" namespace to be "Ready" ...
	I0414 13:52:28.665120 2222437 pod_ready.go:98] node "test-preload-944524" hosting pod "kube-controller-manager-test-preload-944524" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-944524" has status "Ready":"False"
	I0414 13:52:28.665154 2222437 pod_ready.go:82] duration metric: took 170.611717ms for pod "kube-controller-manager-test-preload-944524" in "kube-system" namespace to be "Ready" ...
	E0414 13:52:28.665163 2222437 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-944524" hosting pod "kube-controller-manager-test-preload-944524" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-944524" has status "Ready":"False"
	I0414 13:52:28.665169 2222437 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kgqdm" in "kube-system" namespace to be "Ready" ...
	I0414 13:52:29.053708 2222437 pod_ready.go:98] node "test-preload-944524" hosting pod "kube-proxy-kgqdm" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-944524" has status "Ready":"False"
	I0414 13:52:29.053748 2222437 pod_ready.go:82] duration metric: took 388.569012ms for pod "kube-proxy-kgqdm" in "kube-system" namespace to be "Ready" ...
	E0414 13:52:29.053765 2222437 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-944524" hosting pod "kube-proxy-kgqdm" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-944524" has status "Ready":"False"
	I0414 13:52:29.053775 2222437 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-944524" in "kube-system" namespace to be "Ready" ...
	I0414 13:52:29.454011 2222437 pod_ready.go:98] node "test-preload-944524" hosting pod "kube-scheduler-test-preload-944524" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-944524" has status "Ready":"False"
	I0414 13:52:29.454053 2222437 pod_ready.go:82] duration metric: took 400.267501ms for pod "kube-scheduler-test-preload-944524" in "kube-system" namespace to be "Ready" ...
	E0414 13:52:29.454069 2222437 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-944524" hosting pod "kube-scheduler-test-preload-944524" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-944524" has status "Ready":"False"
	I0414 13:52:29.454080 2222437 pod_ready.go:39] duration metric: took 978.361534ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 13:52:29.454120 2222437 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 13:52:29.466055 2222437 ops.go:34] apiserver oom_adj: -16
	I0414 13:52:29.466078 2222437 kubeadm.go:597] duration metric: took 9.067174876s to restartPrimaryControlPlane
	I0414 13:52:29.466089 2222437 kubeadm.go:394] duration metric: took 9.133499478s to StartCluster
	I0414 13:52:29.466115 2222437 settings.go:142] acquiring lock: {Name:mk2be36efecc8d95b489214d6449055db55f6f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:52:29.466208 2222437 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 13:52:29.466847 2222437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/kubeconfig: {Name:mka4d12cff403cd78c270c5ea752d21aa135c1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:52:29.467118 2222437 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 13:52:29.467204 2222437 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 13:52:29.467316 2222437 addons.go:69] Setting storage-provisioner=true in profile "test-preload-944524"
	I0414 13:52:29.467338 2222437 addons.go:69] Setting default-storageclass=true in profile "test-preload-944524"
	I0414 13:52:29.467360 2222437 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-944524"
	I0414 13:52:29.467376 2222437 config.go:182] Loaded profile config "test-preload-944524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0414 13:52:29.467341 2222437 addons.go:238] Setting addon storage-provisioner=true in "test-preload-944524"
	W0414 13:52:29.467452 2222437 addons.go:247] addon storage-provisioner should already be in state true
	I0414 13:52:29.467493 2222437 host.go:66] Checking if "test-preload-944524" exists ...
	I0414 13:52:29.467753 2222437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:52:29.467809 2222437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:52:29.467917 2222437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:52:29.467949 2222437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:52:29.468661 2222437 out.go:177] * Verifying Kubernetes components...
	I0414 13:52:29.469938 2222437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:52:29.483498 2222437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I0414 13:52:29.483512 2222437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43933
	I0414 13:52:29.483970 2222437 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:52:29.484044 2222437 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:52:29.484506 2222437 main.go:141] libmachine: Using API Version  1
	I0414 13:52:29.484521 2222437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:52:29.484634 2222437 main.go:141] libmachine: Using API Version  1
	I0414 13:52:29.484656 2222437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:52:29.484935 2222437 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:52:29.485000 2222437 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:52:29.485332 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetState
	I0414 13:52:29.485532 2222437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:52:29.485578 2222437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:52:29.487413 2222437 kapi.go:59] client config for test-preload-944524: &rest.Config{Host:"https://192.168.39.64:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/test-preload-944524/client.crt", KeyFile:"/home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/test-preload-944524/client.key", CAFile:"/home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]u
int8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 13:52:29.487685 2222437 addons.go:238] Setting addon default-storageclass=true in "test-preload-944524"
	W0414 13:52:29.487700 2222437 addons.go:247] addon default-storageclass should already be in state true
	I0414 13:52:29.487726 2222437 host.go:66] Checking if "test-preload-944524" exists ...
	I0414 13:52:29.488000 2222437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:52:29.488035 2222437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:52:29.502201 2222437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I0414 13:52:29.502405 2222437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38625
	I0414 13:52:29.502751 2222437 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:52:29.502884 2222437 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:52:29.503329 2222437 main.go:141] libmachine: Using API Version  1
	I0414 13:52:29.503330 2222437 main.go:141] libmachine: Using API Version  1
	I0414 13:52:29.503358 2222437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:52:29.503368 2222437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:52:29.503731 2222437 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:52:29.503743 2222437 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:52:29.503956 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetState
	I0414 13:52:29.504383 2222437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:52:29.504438 2222437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:52:29.505757 2222437 main.go:141] libmachine: (test-preload-944524) Calling .DriverName
	I0414 13:52:29.507607 2222437 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:52:29.508766 2222437 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 13:52:29.508795 2222437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 13:52:29.508809 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHHostname
	I0414 13:52:29.511458 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:29.511939 2222437 main.go:141] libmachine: (test-preload-944524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:cd:71", ip: ""} in network mk-test-preload-944524: {Iface:virbr1 ExpiryTime:2025-04-14 14:51:56 +0000 UTC Type:0 Mac:52:54:00:98:cd:71 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-944524 Clientid:01:52:54:00:98:cd:71}
	I0414 13:52:29.511974 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined IP address 192.168.39.64 and MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:29.512145 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHPort
	I0414 13:52:29.512285 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHKeyPath
	I0414 13:52:29.512406 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHUsername
	I0414 13:52:29.512519 2222437 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/test-preload-944524/id_rsa Username:docker}
	I0414 13:52:29.533012 2222437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40695
	I0414 13:52:29.533480 2222437 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:52:29.533922 2222437 main.go:141] libmachine: Using API Version  1
	I0414 13:52:29.533946 2222437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:52:29.534336 2222437 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:52:29.534582 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetState
	I0414 13:52:29.536409 2222437 main.go:141] libmachine: (test-preload-944524) Calling .DriverName
	I0414 13:52:29.536638 2222437 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 13:52:29.536658 2222437 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 13:52:29.536677 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHHostname
	I0414 13:52:29.539449 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:29.539824 2222437 main.go:141] libmachine: (test-preload-944524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:cd:71", ip: ""} in network mk-test-preload-944524: {Iface:virbr1 ExpiryTime:2025-04-14 14:51:56 +0000 UTC Type:0 Mac:52:54:00:98:cd:71 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-944524 Clientid:01:52:54:00:98:cd:71}
	I0414 13:52:29.539853 2222437 main.go:141] libmachine: (test-preload-944524) DBG | domain test-preload-944524 has defined IP address 192.168.39.64 and MAC address 52:54:00:98:cd:71 in network mk-test-preload-944524
	I0414 13:52:29.539993 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHPort
	I0414 13:52:29.540177 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHKeyPath
	I0414 13:52:29.540298 2222437 main.go:141] libmachine: (test-preload-944524) Calling .GetSSHUsername
	I0414 13:52:29.540481 2222437 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/test-preload-944524/id_rsa Username:docker}
	I0414 13:52:29.653749 2222437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 13:52:29.675226 2222437 node_ready.go:35] waiting up to 6m0s for node "test-preload-944524" to be "Ready" ...
	I0414 13:52:29.735349 2222437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 13:52:29.757612 2222437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 13:52:30.755202 2222437 main.go:141] libmachine: Making call to close driver server
	I0414 13:52:30.755228 2222437 main.go:141] libmachine: (test-preload-944524) Calling .Close
	I0414 13:52:30.755289 2222437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.019890513s)
	I0414 13:52:30.755334 2222437 main.go:141] libmachine: Making call to close driver server
	I0414 13:52:30.755351 2222437 main.go:141] libmachine: (test-preload-944524) Calling .Close
	I0414 13:52:30.755639 2222437 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:52:30.755665 2222437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:52:30.755674 2222437 main.go:141] libmachine: Making call to close driver server
	I0414 13:52:30.755681 2222437 main.go:141] libmachine: (test-preload-944524) Calling .Close
	I0414 13:52:30.755702 2222437 main.go:141] libmachine: (test-preload-944524) DBG | Closing plugin on server side
	I0414 13:52:30.755730 2222437 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:52:30.755750 2222437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:52:30.755776 2222437 main.go:141] libmachine: Making call to close driver server
	I0414 13:52:30.755787 2222437 main.go:141] libmachine: (test-preload-944524) Calling .Close
	I0414 13:52:30.755921 2222437 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:52:30.755936 2222437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:52:30.756264 2222437 main.go:141] libmachine: (test-preload-944524) DBG | Closing plugin on server side
	I0414 13:52:30.756263 2222437 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:52:30.756283 2222437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:52:30.764062 2222437 main.go:141] libmachine: Making call to close driver server
	I0414 13:52:30.764080 2222437 main.go:141] libmachine: (test-preload-944524) Calling .Close
	I0414 13:52:30.764342 2222437 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:52:30.764358 2222437 main.go:141] libmachine: (test-preload-944524) DBG | Closing plugin on server side
	I0414 13:52:30.764361 2222437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:52:30.766675 2222437 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 13:52:30.767720 2222437 addons.go:514] duration metric: took 1.300531208s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 13:52:31.679115 2222437 node_ready.go:53] node "test-preload-944524" has status "Ready":"False"
	I0414 13:52:33.679762 2222437 node_ready.go:53] node "test-preload-944524" has status "Ready":"False"
	I0414 13:52:36.179362 2222437 node_ready.go:53] node "test-preload-944524" has status "Ready":"False"
	I0414 13:52:37.678889 2222437 node_ready.go:49] node "test-preload-944524" has status "Ready":"True"
	I0414 13:52:37.678918 2222437 node_ready.go:38] duration metric: took 8.00365691s for node "test-preload-944524" to be "Ready" ...
	I0414 13:52:37.678931 2222437 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 13:52:37.682346 2222437 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-c2p94" in "kube-system" namespace to be "Ready" ...
	I0414 13:52:37.686599 2222437 pod_ready.go:93] pod "coredns-6d4b75cb6d-c2p94" in "kube-system" namespace has status "Ready":"True"
	I0414 13:52:37.686629 2222437 pod_ready.go:82] duration metric: took 4.25454ms for pod "coredns-6d4b75cb6d-c2p94" in "kube-system" namespace to be "Ready" ...
	I0414 13:52:37.686642 2222437 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-944524" in "kube-system" namespace to be "Ready" ...
	I0414 13:52:39.692678 2222437 pod_ready.go:103] pod "etcd-test-preload-944524" in "kube-system" namespace has status "Ready":"False"
	I0414 13:52:40.193525 2222437 pod_ready.go:93] pod "etcd-test-preload-944524" in "kube-system" namespace has status "Ready":"True"
	I0414 13:52:40.193551 2222437 pod_ready.go:82] duration metric: took 2.506900732s for pod "etcd-test-preload-944524" in "kube-system" namespace to be "Ready" ...
	I0414 13:52:40.193560 2222437 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-944524" in "kube-system" namespace to be "Ready" ...
	I0414 13:52:40.197655 2222437 pod_ready.go:93] pod "kube-apiserver-test-preload-944524" in "kube-system" namespace has status "Ready":"True"
	I0414 13:52:40.197675 2222437 pod_ready.go:82] duration metric: took 4.109359ms for pod "kube-apiserver-test-preload-944524" in "kube-system" namespace to be "Ready" ...
	I0414 13:52:40.197686 2222437 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-944524" in "kube-system" namespace to be "Ready" ...
	I0414 13:52:42.202639 2222437 pod_ready.go:93] pod "kube-controller-manager-test-preload-944524" in "kube-system" namespace has status "Ready":"True"
	I0414 13:52:42.202668 2222437 pod_ready.go:82] duration metric: took 2.004975439s for pod "kube-controller-manager-test-preload-944524" in "kube-system" namespace to be "Ready" ...
	I0414 13:52:42.202677 2222437 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kgqdm" in "kube-system" namespace to be "Ready" ...
	I0414 13:52:42.206560 2222437 pod_ready.go:93] pod "kube-proxy-kgqdm" in "kube-system" namespace has status "Ready":"True"
	I0414 13:52:42.206575 2222437 pod_ready.go:82] duration metric: took 3.892433ms for pod "kube-proxy-kgqdm" in "kube-system" namespace to be "Ready" ...
	I0414 13:52:42.206582 2222437 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-944524" in "kube-system" namespace to be "Ready" ...
	I0414 13:52:42.210163 2222437 pod_ready.go:93] pod "kube-scheduler-test-preload-944524" in "kube-system" namespace has status "Ready":"True"
	I0414 13:52:42.210180 2222437 pod_ready.go:82] duration metric: took 3.592457ms for pod "kube-scheduler-test-preload-944524" in "kube-system" namespace to be "Ready" ...
	I0414 13:52:42.210190 2222437 pod_ready.go:39] duration metric: took 4.531246187s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 13:52:42.210206 2222437 api_server.go:52] waiting for apiserver process to appear ...
	I0414 13:52:42.210259 2222437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:52:42.225830 2222437 api_server.go:72] duration metric: took 12.758676988s to wait for apiserver process to appear ...
	I0414 13:52:42.225852 2222437 api_server.go:88] waiting for apiserver healthz status ...
	I0414 13:52:42.225868 2222437 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0414 13:52:42.233596 2222437 api_server.go:279] https://192.168.39.64:8443/healthz returned 200:
	ok
	I0414 13:52:42.234749 2222437 api_server.go:141] control plane version: v1.24.4
	I0414 13:52:42.234771 2222437 api_server.go:131] duration metric: took 8.91265ms to wait for apiserver health ...
	I0414 13:52:42.234778 2222437 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 13:52:42.281622 2222437 system_pods.go:59] 7 kube-system pods found
	I0414 13:52:42.281653 2222437 system_pods.go:61] "coredns-6d4b75cb6d-c2p94" [5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb] Running
	I0414 13:52:42.281658 2222437 system_pods.go:61] "etcd-test-preload-944524" [0c3c5b2f-6685-4b87-a22f-f5088939e140] Running
	I0414 13:52:42.281661 2222437 system_pods.go:61] "kube-apiserver-test-preload-944524" [359ee1f5-692b-4509-854b-352b5bbc76b9] Running
	I0414 13:52:42.281665 2222437 system_pods.go:61] "kube-controller-manager-test-preload-944524" [5a3a1589-333e-4bf7-85af-c68207ecec60] Running
	I0414 13:52:42.281668 2222437 system_pods.go:61] "kube-proxy-kgqdm" [13fd9f92-d334-4ecd-a25c-71a452dea8d9] Running
	I0414 13:52:42.281671 2222437 system_pods.go:61] "kube-scheduler-test-preload-944524" [edd6f02d-79fd-45bb-a027-847d468416b4] Running
	I0414 13:52:42.281674 2222437 system_pods.go:61] "storage-provisioner" [1fdc0e8c-7d00-47f8-b85e-fb37a6c90300] Running
	I0414 13:52:42.281680 2222437 system_pods.go:74] duration metric: took 46.896422ms to wait for pod list to return data ...
	I0414 13:52:42.281689 2222437 default_sa.go:34] waiting for default service account to be created ...
	I0414 13:52:42.478784 2222437 default_sa.go:45] found service account: "default"
	I0414 13:52:42.478816 2222437 default_sa.go:55] duration metric: took 197.119327ms for default service account to be created ...
	I0414 13:52:42.478829 2222437 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 13:52:42.680593 2222437 system_pods.go:86] 7 kube-system pods found
	I0414 13:52:42.680637 2222437 system_pods.go:89] "coredns-6d4b75cb6d-c2p94" [5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb] Running
	I0414 13:52:42.680646 2222437 system_pods.go:89] "etcd-test-preload-944524" [0c3c5b2f-6685-4b87-a22f-f5088939e140] Running
	I0414 13:52:42.680652 2222437 system_pods.go:89] "kube-apiserver-test-preload-944524" [359ee1f5-692b-4509-854b-352b5bbc76b9] Running
	I0414 13:52:42.680657 2222437 system_pods.go:89] "kube-controller-manager-test-preload-944524" [5a3a1589-333e-4bf7-85af-c68207ecec60] Running
	I0414 13:52:42.680662 2222437 system_pods.go:89] "kube-proxy-kgqdm" [13fd9f92-d334-4ecd-a25c-71a452dea8d9] Running
	I0414 13:52:42.680667 2222437 system_pods.go:89] "kube-scheduler-test-preload-944524" [edd6f02d-79fd-45bb-a027-847d468416b4] Running
	I0414 13:52:42.680672 2222437 system_pods.go:89] "storage-provisioner" [1fdc0e8c-7d00-47f8-b85e-fb37a6c90300] Running
	I0414 13:52:42.680681 2222437 system_pods.go:126] duration metric: took 201.844918ms to wait for k8s-apps to be running ...
	I0414 13:52:42.680691 2222437 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 13:52:42.680770 2222437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:52:42.696102 2222437 system_svc.go:56] duration metric: took 15.400596ms WaitForService to wait for kubelet
	I0414 13:52:42.696135 2222437 kubeadm.go:582] duration metric: took 13.228985991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 13:52:42.696158 2222437 node_conditions.go:102] verifying NodePressure condition ...
	I0414 13:52:42.879911 2222437 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 13:52:42.879944 2222437 node_conditions.go:123] node cpu capacity is 2
	I0414 13:52:42.879957 2222437 node_conditions.go:105] duration metric: took 183.793395ms to run NodePressure ...
	I0414 13:52:42.879974 2222437 start.go:241] waiting for startup goroutines ...
	I0414 13:52:42.879984 2222437 start.go:246] waiting for cluster config update ...
	I0414 13:52:42.880009 2222437 start.go:255] writing updated cluster config ...
	I0414 13:52:42.880369 2222437 ssh_runner.go:195] Run: rm -f paused
	I0414 13:52:42.928322 2222437 start.go:600] kubectl: 1.32.3, cluster: 1.24.4 (minor skew: 8)
	I0414 13:52:42.930088 2222437 out.go:201] 
	W0414 13:52:42.931346 2222437 out.go:270] ! /usr/local/bin/kubectl is version 1.32.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0414 13:52:42.932478 2222437 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0414 13:52:42.933666 2222437 out.go:177] * Done! kubectl is now configured to use "test-preload-944524" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.844771699Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:49b89d05087fc548ddc26421ea1bc465d4434763ec62bc53e951cdf8f65485b9,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-c2p94,Uid:5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744638754574514806,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-c2p94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T13:52:26.662799428Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f14394a9d49678c5b2a1d1e0e9499b094d2c116e94b630b1a8193a521cc21e39,Metadata:&PodSandboxMetadata{Name:kube-proxy-kgqdm,Uid:13fd9f92-d334-4ecd-a25c-71a452dea8d9,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1744638747575171318,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kgqdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13fd9f92-d334-4ecd-a25c-71a452dea8d9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T13:52:26.662776871Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c3dfc91cd2ea72372ce43229ef05b4eb94170cf2aff0028441aac5773c6c2b6e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1fdc0e8c-7d00-47f8-b85e-fb37a6c90300,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744638747273797777,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fdc0e8c-7d00-47f8-b85e-fb37
a6c90300,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-04-14T13:52:26.662779109Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cf24ac5c7cbf45102cf5e19b0cd0a8038c8b36683ea870dc6f6599c328d57b8e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-944524,Uid:2ac9bdb
70006d8a252912dae3175d343,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744638742204504752,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac9bdb70006d8a252912dae3175d343,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.64:8443,kubernetes.io/config.hash: 2ac9bdb70006d8a252912dae3175d343,kubernetes.io/config.seen: 2025-04-14T13:52:21.659945814Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0fb10ab6bf0888231efb46b39bdce8e81405c1c6dd1f672eb7ad29c6bb987282,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-944524,Uid:66cf396ac2536ba6cd09c4704a163317,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744638742197320167,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-t
est-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cf396ac2536ba6cd09c4704a163317,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.64:2379,kubernetes.io/config.hash: 66cf396ac2536ba6cd09c4704a163317,kubernetes.io/config.seen: 2025-04-14T13:52:21.672843271Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fe48affcfd558d4821b336be57338b66409057e07c25e6b2558e4aad24a11b31,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-944524,Uid:cda4ce56c5c4a22335b8ea45e823a434,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744638742192708122,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cda4ce56c5c4a22335b8ea45e823a434,tier: control-plane,},Annotations:map[string]string{kubernetes.io/con
fig.hash: cda4ce56c5c4a22335b8ea45e823a434,kubernetes.io/config.seen: 2025-04-14T13:52:21.659985929Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:38738ebe087f303fb2de051a56094e76902e6e62d83b771f02056fc497b687c5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-944524,Uid:c077f689161ddf6aa9acdcb5e0c16b59,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744638742192159044,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c077f689161ddf6aa9acdcb5e0c16b59,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c077f689161ddf6aa9acdcb5e0c16b59,kubernetes.io/config.seen: 2025-04-14T13:52:21.659987099Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=a92abb6f-97bb-4aab-87f7-79d277807aaa name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.845614164Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a07dfe7-7701-4651-a3e0-612de051eb5f name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.845660652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a07dfe7-7701-4651-a3e0-612de051eb5f name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.845878047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed28d4c07c9d8022951e19baec861a026484bae3cb22f3bf31426bf11ee7d33f,PodSandboxId:49b89d05087fc548ddc26421ea1bc465d4434763ec62bc53e951cdf8f65485b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744638754804804274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-c2p94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb,},Annotations:map[string]string{io.kubernetes.container.hash: 207d219b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c21eecdd8f7621602a7c2bc3d3611cabd7e52ecce9e024e0965b3f481a02073a,PodSandboxId:f14394a9d49678c5b2a1d1e0e9499b094d2c116e94b630b1a8193a521cc21e39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744638747688425412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgqdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 13fd9f92-d334-4ecd-a25c-71a452dea8d9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bd3da63,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fec2e246ee8583a6209aa5688315e3bf213a40855496691a80062a7af285fc,PodSandboxId:c3dfc91cd2ea72372ce43229ef05b4eb94170cf2aff0028441aac5773c6c2b6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744638747359853432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f
dc0e8c-7d00-47f8-b85e-fb37a6c90300,},Annotations:map[string]string{io.kubernetes.container.hash: 970a4c6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32740b147b9d83203214c6e9dcff0e1c49ed3272c764398f9876b9061a591dae,PodSandboxId:0fb10ab6bf0888231efb46b39bdce8e81405c1c6dd1f672eb7ad29c6bb987282,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744638742394916225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cf396ac2536ba6cd09c4704a163317,},Anno
tations:map[string]string{io.kubernetes.container.hash: 234b098e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c20cd19dc0070f5e1826f79b06507c2ac058052a0ffdaf97e7a81897e1c8af,PodSandboxId:cf24ac5c7cbf45102cf5e19b0cd0a8038c8b36683ea870dc6f6599c328d57b8e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744638742396087454,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac9bdb70006d8a252912dae3175d343,},Annotations:map
[string]string{io.kubernetes.container.hash: 5d0c3857,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cbbe225b23dee702f0efe6646738893d2c40e41662c93cd6e1c6977bb43f36a,PodSandboxId:38738ebe087f303fb2de051a56094e76902e6e62d83b771f02056fc497b687c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744638742434708245,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c077f689161ddf6aa9acdcb5e0c16b59,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a22bc6ac09bfd620c98c76039b7361bdd094a6023f7065db8aa7325f404da21,PodSandboxId:fe48affcfd558d4821b336be57338b66409057e07c25e6b2558e4aad24a11b31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744638742412600609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cda4ce56c5c4a22335b8ea45e823a434,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a07dfe7-7701-4651-a3e0-612de051eb5f name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.871404111Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b0bc087-c0ca-4b30-a3e6-7d3e46de2299 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.871467894Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b0bc087-c0ca-4b30-a3e6-7d3e46de2299 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.872842706Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a727b1c-1cbb-46c5-8ed0-9c612e201d79 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.873472650Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744638763873451749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a727b1c-1cbb-46c5-8ed0-9c612e201d79 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.874256422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76571ad6-65fe-4c7a-afcd-60c2c4c9dd53 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.874306513Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76571ad6-65fe-4c7a-afcd-60c2c4c9dd53 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.874509924Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed28d4c07c9d8022951e19baec861a026484bae3cb22f3bf31426bf11ee7d33f,PodSandboxId:49b89d05087fc548ddc26421ea1bc465d4434763ec62bc53e951cdf8f65485b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744638754804804274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-c2p94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb,},Annotations:map[string]string{io.kubernetes.container.hash: 207d219b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c21eecdd8f7621602a7c2bc3d3611cabd7e52ecce9e024e0965b3f481a02073a,PodSandboxId:f14394a9d49678c5b2a1d1e0e9499b094d2c116e94b630b1a8193a521cc21e39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744638747688425412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgqdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 13fd9f92-d334-4ecd-a25c-71a452dea8d9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bd3da63,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fec2e246ee8583a6209aa5688315e3bf213a40855496691a80062a7af285fc,PodSandboxId:c3dfc91cd2ea72372ce43229ef05b4eb94170cf2aff0028441aac5773c6c2b6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744638747359853432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f
dc0e8c-7d00-47f8-b85e-fb37a6c90300,},Annotations:map[string]string{io.kubernetes.container.hash: 970a4c6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32740b147b9d83203214c6e9dcff0e1c49ed3272c764398f9876b9061a591dae,PodSandboxId:0fb10ab6bf0888231efb46b39bdce8e81405c1c6dd1f672eb7ad29c6bb987282,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744638742394916225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cf396ac2536ba6cd09c4704a163317,},Anno
tations:map[string]string{io.kubernetes.container.hash: 234b098e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c20cd19dc0070f5e1826f79b06507c2ac058052a0ffdaf97e7a81897e1c8af,PodSandboxId:cf24ac5c7cbf45102cf5e19b0cd0a8038c8b36683ea870dc6f6599c328d57b8e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744638742396087454,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac9bdb70006d8a252912dae3175d343,},Annotations:map
[string]string{io.kubernetes.container.hash: 5d0c3857,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cbbe225b23dee702f0efe6646738893d2c40e41662c93cd6e1c6977bb43f36a,PodSandboxId:38738ebe087f303fb2de051a56094e76902e6e62d83b771f02056fc497b687c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744638742434708245,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c077f689161ddf6aa9acdcb5e0c16b59,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a22bc6ac09bfd620c98c76039b7361bdd094a6023f7065db8aa7325f404da21,PodSandboxId:fe48affcfd558d4821b336be57338b66409057e07c25e6b2558e4aad24a11b31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744638742412600609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cda4ce56c5c4a22335b8ea45e823a434,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76571ad6-65fe-4c7a-afcd-60c2c4c9dd53 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.913943537Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e62f9649-6e8e-484a-8d32-6397fb4c135f name=/runtime.v1.RuntimeService/Version
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.914012157Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e62f9649-6e8e-484a-8d32-6397fb4c135f name=/runtime.v1.RuntimeService/Version
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.914978691Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dbf72009-1593-47a6-bc26-f4547a06642b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.915492182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744638763915464038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dbf72009-1593-47a6-bc26-f4547a06642b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.916000901Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcc5bc9e-3ee2-425a-b9a7-46b5d2c68669 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.916050811Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcc5bc9e-3ee2-425a-b9a7-46b5d2c68669 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.916277707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed28d4c07c9d8022951e19baec861a026484bae3cb22f3bf31426bf11ee7d33f,PodSandboxId:49b89d05087fc548ddc26421ea1bc465d4434763ec62bc53e951cdf8f65485b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744638754804804274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-c2p94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb,},Annotations:map[string]string{io.kubernetes.container.hash: 207d219b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c21eecdd8f7621602a7c2bc3d3611cabd7e52ecce9e024e0965b3f481a02073a,PodSandboxId:f14394a9d49678c5b2a1d1e0e9499b094d2c116e94b630b1a8193a521cc21e39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744638747688425412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgqdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 13fd9f92-d334-4ecd-a25c-71a452dea8d9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bd3da63,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fec2e246ee8583a6209aa5688315e3bf213a40855496691a80062a7af285fc,PodSandboxId:c3dfc91cd2ea72372ce43229ef05b4eb94170cf2aff0028441aac5773c6c2b6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744638747359853432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f
dc0e8c-7d00-47f8-b85e-fb37a6c90300,},Annotations:map[string]string{io.kubernetes.container.hash: 970a4c6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32740b147b9d83203214c6e9dcff0e1c49ed3272c764398f9876b9061a591dae,PodSandboxId:0fb10ab6bf0888231efb46b39bdce8e81405c1c6dd1f672eb7ad29c6bb987282,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744638742394916225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cf396ac2536ba6cd09c4704a163317,},Anno
tations:map[string]string{io.kubernetes.container.hash: 234b098e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c20cd19dc0070f5e1826f79b06507c2ac058052a0ffdaf97e7a81897e1c8af,PodSandboxId:cf24ac5c7cbf45102cf5e19b0cd0a8038c8b36683ea870dc6f6599c328d57b8e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744638742396087454,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac9bdb70006d8a252912dae3175d343,},Annotations:map
[string]string{io.kubernetes.container.hash: 5d0c3857,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cbbe225b23dee702f0efe6646738893d2c40e41662c93cd6e1c6977bb43f36a,PodSandboxId:38738ebe087f303fb2de051a56094e76902e6e62d83b771f02056fc497b687c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744638742434708245,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c077f689161ddf6aa9acdcb5e0c16b59,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a22bc6ac09bfd620c98c76039b7361bdd094a6023f7065db8aa7325f404da21,PodSandboxId:fe48affcfd558d4821b336be57338b66409057e07c25e6b2558e4aad24a11b31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744638742412600609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cda4ce56c5c4a22335b8ea45e823a434,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dcc5bc9e-3ee2-425a-b9a7-46b5d2c68669 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.952532361Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c449e938-3942-40d1-9fc2-442ce347a268 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.952599184Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c449e938-3942-40d1-9fc2-442ce347a268 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.954001933Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb391807-8e1f-4f13-8339-c8f5001d9eb9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.954686238Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744638763954476803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb391807-8e1f-4f13-8339-c8f5001d9eb9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.955258934Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fd86e89-1f68-4069-b71e-6c9a9546db60 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.955313094Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fd86e89-1f68-4069-b71e-6c9a9546db60 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:52:43 test-preload-944524 crio[677]: time="2025-04-14 13:52:43.955464396Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed28d4c07c9d8022951e19baec861a026484bae3cb22f3bf31426bf11ee7d33f,PodSandboxId:49b89d05087fc548ddc26421ea1bc465d4434763ec62bc53e951cdf8f65485b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744638754804804274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-c2p94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb,},Annotations:map[string]string{io.kubernetes.container.hash: 207d219b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c21eecdd8f7621602a7c2bc3d3611cabd7e52ecce9e024e0965b3f481a02073a,PodSandboxId:f14394a9d49678c5b2a1d1e0e9499b094d2c116e94b630b1a8193a521cc21e39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744638747688425412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgqdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 13fd9f92-d334-4ecd-a25c-71a452dea8d9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bd3da63,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fec2e246ee8583a6209aa5688315e3bf213a40855496691a80062a7af285fc,PodSandboxId:c3dfc91cd2ea72372ce43229ef05b4eb94170cf2aff0028441aac5773c6c2b6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744638747359853432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f
dc0e8c-7d00-47f8-b85e-fb37a6c90300,},Annotations:map[string]string{io.kubernetes.container.hash: 970a4c6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32740b147b9d83203214c6e9dcff0e1c49ed3272c764398f9876b9061a591dae,PodSandboxId:0fb10ab6bf0888231efb46b39bdce8e81405c1c6dd1f672eb7ad29c6bb987282,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744638742394916225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cf396ac2536ba6cd09c4704a163317,},Anno
tations:map[string]string{io.kubernetes.container.hash: 234b098e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c20cd19dc0070f5e1826f79b06507c2ac058052a0ffdaf97e7a81897e1c8af,PodSandboxId:cf24ac5c7cbf45102cf5e19b0cd0a8038c8b36683ea870dc6f6599c328d57b8e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744638742396087454,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac9bdb70006d8a252912dae3175d343,},Annotations:map
[string]string{io.kubernetes.container.hash: 5d0c3857,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cbbe225b23dee702f0efe6646738893d2c40e41662c93cd6e1c6977bb43f36a,PodSandboxId:38738ebe087f303fb2de051a56094e76902e6e62d83b771f02056fc497b687c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744638742434708245,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c077f689161ddf6aa9acdcb5e0c16b59,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a22bc6ac09bfd620c98c76039b7361bdd094a6023f7065db8aa7325f404da21,PodSandboxId:fe48affcfd558d4821b336be57338b66409057e07c25e6b2558e4aad24a11b31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744638742412600609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-944524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cda4ce56c5c4a22335b8ea45e823a434,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1fd86e89-1f68-4069-b71e-6c9a9546db60 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ed28d4c07c9d8       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   9 seconds ago       Running             coredns                   1                   49b89d05087fc       coredns-6d4b75cb6d-c2p94
	c21eecdd8f762       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   16 seconds ago      Running             kube-proxy                1                   f14394a9d4967       kube-proxy-kgqdm
	09fec2e246ee8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   c3dfc91cd2ea7       storage-provisioner
	9cbbe225b23de       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   38738ebe087f3       kube-scheduler-test-preload-944524
	0a22bc6ac09bf       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   fe48affcfd558       kube-controller-manager-test-preload-944524
	b6c20cd19dc00       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   cf24ac5c7cbf4       kube-apiserver-test-preload-944524
	32740b147b9d8       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   0fb10ab6bf088       etcd-test-preload-944524
	
	
	==> coredns [ed28d4c07c9d8022951e19baec861a026484bae3cb22f3bf31426bf11ee7d33f] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:38237 - 24620 "HINFO IN 5529146847582340578.7233514368077868491. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008089966s
	
	
	==> describe nodes <==
	Name:               test-preload-944524
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-944524
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=460835bb8f21087bfa90e48a25f4afc66a903d88
	                    minikube.k8s.io/name=test-preload-944524
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T13_50_51_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 13:50:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-944524
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 13:52:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 13:52:37 +0000   Mon, 14 Apr 2025 13:50:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 13:52:37 +0000   Mon, 14 Apr 2025 13:50:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 13:52:37 +0000   Mon, 14 Apr 2025 13:50:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 13:52:37 +0000   Mon, 14 Apr 2025 13:52:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.64
	  Hostname:    test-preload-944524
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b04d12cac3fc4eb9b57b71c38dd4291d
	  System UUID:                b04d12ca-c3fc-4eb9-b57b-71c38dd4291d
	  Boot ID:                    7326e381-37ad-448d-95a3-61bb61251f1b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-c2p94                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     99s
	  kube-system                 etcd-test-preload-944524                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         112s
	  kube-system                 kube-apiserver-test-preload-944524             250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-test-preload-944524    200m (10%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-kgqdm                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-scheduler-test-preload-944524             100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 97s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m (x5 over 2m1s)  kubelet          Node test-preload-944524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x4 over 2m1s)  kubelet          Node test-preload-944524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x4 over 2m1s)  kubelet          Node test-preload-944524 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node test-preload-944524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node test-preload-944524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node test-preload-944524 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  113s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                102s               kubelet          Node test-preload-944524 status is now: NodeReady
	  Normal  RegisteredNode           99s                node-controller  Node test-preload-944524 event: Registered Node test-preload-944524 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node test-preload-944524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node test-preload-944524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node test-preload-944524 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node test-preload-944524 event: Registered Node test-preload-944524 in Controller
	
	
	==> dmesg <==
	[Apr14 13:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051854] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040107] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.902720] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.583059] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.597792] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr14 13:52] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.061170] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053866] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.197165] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.111552] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.262888] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[ +13.414624] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.055105] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.623062] systemd-fstab-generator[1130]: Ignoring "noauto" option for root device
	[  +5.791426] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.316158] systemd-fstab-generator[1768]: Ignoring "noauto" option for root device
	[  +5.048380] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [32740b147b9d83203214c6e9dcff0e1c49ed3272c764398f9876b9061a591dae] <==
	{"level":"info","ts":"2025-04-14T13:52:22.850Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"7dcc3547d111063c","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-14T13:52:22.850Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-04-14T13:52:22.851Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c switched to configuration voters=(9064678732556469820)"}
	{"level":"info","ts":"2025-04-14T13:52:22.853Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c3619ef1effce12d","local-member-id":"7dcc3547d111063c","added-peer-id":"7dcc3547d111063c","added-peer-peer-urls":["https://192.168.39.64:2380"]}
	{"level":"info","ts":"2025-04-14T13:52:22.853Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c3619ef1effce12d","local-member-id":"7dcc3547d111063c","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T13:52:22.854Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T13:52:22.865Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T13:52:22.865Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.64:2380"}
	{"level":"info","ts":"2025-04-14T13:52:22.865Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.64:2380"}
	{"level":"info","ts":"2025-04-14T13:52:22.866Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7dcc3547d111063c","initial-advertise-peer-urls":["https://192.168.39.64:2380"],"listen-peer-urls":["https://192.168.39.64:2380"],"advertise-client-urls":["https://192.168.39.64:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.64:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T13:52:22.868Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-14T13:52:24.101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-14T13:52:24.101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-14T13:52:24.101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c received MsgPreVoteResp from 7dcc3547d111063c at term 2"}
	{"level":"info","ts":"2025-04-14T13:52:24.101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became candidate at term 3"}
	{"level":"info","ts":"2025-04-14T13:52:24.101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c received MsgVoteResp from 7dcc3547d111063c at term 3"}
	{"level":"info","ts":"2025-04-14T13:52:24.101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became leader at term 3"}
	{"level":"info","ts":"2025-04-14T13:52:24.101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7dcc3547d111063c elected leader 7dcc3547d111063c at term 3"}
	{"level":"info","ts":"2025-04-14T13:52:24.107Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"7dcc3547d111063c","local-member-attributes":"{Name:test-preload-944524 ClientURLs:[https://192.168.39.64:2379]}","request-path":"/0/members/7dcc3547d111063c/attributes","cluster-id":"c3619ef1effce12d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T13:52:24.107Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T13:52:24.108Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T13:52:24.108Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T13:52:24.108Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T13:52:24.109Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T13:52:24.109Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.64:2379"}
	
	
	==> kernel <==
	 13:52:44 up 0 min,  0 users,  load average: 1.04, 0.30, 0.10
	Linux test-preload-944524 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b6c20cd19dc0070f5e1826f79b06507c2ac058052a0ffdaf97e7a81897e1c8af] <==
	I0414 13:52:26.474743       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0414 13:52:26.474805       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0414 13:52:26.474828       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0414 13:52:26.474852       1 customresource_discovery_controller.go:209] Starting DiscoveryController
	I0414 13:52:26.474870       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0414 13:52:26.479886       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0414 13:52:26.650593       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0414 13:52:26.652622       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0414 13:52:26.656064       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0414 13:52:26.657558       1 cache.go:39] Caches are synced for autoregister controller
	I0414 13:52:26.657707       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0414 13:52:26.662320       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0414 13:52:26.665241       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0414 13:52:26.674690       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0414 13:52:26.706805       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 13:52:27.162482       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0414 13:52:27.468329       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 13:52:27.967314       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0414 13:52:28.374314       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0414 13:52:28.388978       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0414 13:52:28.433099       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0414 13:52:28.450873       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 13:52:28.458973       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 13:52:39.111045       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 13:52:39.309877       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0a22bc6ac09bfd620c98c76039b7361bdd094a6023f7065db8aa7325f404da21] <==
	I0414 13:52:39.107405       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0414 13:52:39.107499       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-944524. Assuming now as a timestamp.
	I0414 13:52:39.107547       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0414 13:52:39.107850       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0414 13:52:39.108060       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0414 13:52:39.108164       1 event.go:294] "Event occurred" object="test-preload-944524" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-944524 event: Registered Node test-preload-944524 in Controller"
	I0414 13:52:39.108383       1 shared_informer.go:262] Caches are synced for crt configmap
	I0414 13:52:39.111308       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0414 13:52:39.111397       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0414 13:52:39.111516       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0414 13:52:39.111519       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0414 13:52:39.115479       1 shared_informer.go:262] Caches are synced for node
	I0414 13:52:39.115532       1 range_allocator.go:173] Starting range CIDR allocator
	I0414 13:52:39.115558       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0414 13:52:39.115598       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0414 13:52:39.116654       1 shared_informer.go:262] Caches are synced for endpoint
	I0414 13:52:39.119257       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0414 13:52:39.144805       1 shared_informer.go:262] Caches are synced for attach detach
	I0414 13:52:39.236280       1 shared_informer.go:262] Caches are synced for namespace
	I0414 13:52:39.248043       1 shared_informer.go:262] Caches are synced for service account
	I0414 13:52:39.320169       1 shared_informer.go:262] Caches are synced for resource quota
	I0414 13:52:39.368770       1 shared_informer.go:262] Caches are synced for resource quota
	I0414 13:52:39.743490       1 shared_informer.go:262] Caches are synced for garbage collector
	I0414 13:52:39.762609       1 shared_informer.go:262] Caches are synced for garbage collector
	I0414 13:52:39.762683       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [c21eecdd8f7621602a7c2bc3d3611cabd7e52ecce9e024e0965b3f481a02073a] <==
	I0414 13:52:27.916909       1 node.go:163] Successfully retrieved node IP: 192.168.39.64
	I0414 13:52:27.917160       1 server_others.go:138] "Detected node IP" address="192.168.39.64"
	I0414 13:52:27.917328       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0414 13:52:27.956429       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0414 13:52:27.956447       1 server_others.go:206] "Using iptables Proxier"
	I0414 13:52:27.956525       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0414 13:52:27.957238       1 server.go:661] "Version info" version="v1.24.4"
	I0414 13:52:27.957256       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 13:52:27.958812       1 config.go:317] "Starting service config controller"
	I0414 13:52:27.958867       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0414 13:52:27.958900       1 config.go:226] "Starting endpoint slice config controller"
	I0414 13:52:27.958916       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0414 13:52:27.960877       1 config.go:444] "Starting node config controller"
	I0414 13:52:27.961659       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0414 13:52:28.059277       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0414 13:52:28.059375       1 shared_informer.go:262] Caches are synced for service config
	I0414 13:52:28.061731       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [9cbbe225b23dee702f0efe6646738893d2c40e41662c93cd6e1c6977bb43f36a] <==
	W0414 13:52:26.637714       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0414 13:52:26.637739       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0414 13:52:26.637788       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0414 13:52:26.637817       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0414 13:52:26.638045       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 13:52:26.638081       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 13:52:26.638161       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0414 13:52:26.638241       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0414 13:52:26.638328       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0414 13:52:26.638352       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0414 13:52:26.638434       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 13:52:26.638457       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0414 13:52:26.638532       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 13:52:26.638556       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0414 13:52:26.638619       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 13:52:26.638642       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0414 13:52:26.638701       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 13:52:26.638724       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0414 13:52:26.638778       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 13:52:26.638807       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0414 13:52:26.638873       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0414 13:52:26.638896       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0414 13:52:26.638986       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 13:52:26.639056       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0414 13:52:28.223326       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 13:52:26 test-preload-944524 kubelet[1137]: I0414 13:52:26.663385    1137 topology_manager.go:200] "Topology Admit Handler"
	Apr 14 13:52:26 test-preload-944524 kubelet[1137]: E0414 13:52:26.666916    1137 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-c2p94" podUID=5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb
	Apr 14 13:52:26 test-preload-944524 kubelet[1137]: E0414 13:52:26.713661    1137 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 14 13:52:26 test-preload-944524 kubelet[1137]: I0414 13:52:26.723130    1137 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpf4x\" (UniqueName: \"kubernetes.io/projected/13fd9f92-d334-4ecd-a25c-71a452dea8d9-kube-api-access-bpf4x\") pod \"kube-proxy-kgqdm\" (UID: \"13fd9f92-d334-4ecd-a25c-71a452dea8d9\") " pod="kube-system/kube-proxy-kgqdm"
	Apr 14 13:52:26 test-preload-944524 kubelet[1137]: I0414 13:52:26.723225    1137 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn9zm\" (UniqueName: \"kubernetes.io/projected/5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb-kube-api-access-sn9zm\") pod \"coredns-6d4b75cb6d-c2p94\" (UID: \"5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb\") " pod="kube-system/coredns-6d4b75cb6d-c2p94"
	Apr 14 13:52:26 test-preload-944524 kubelet[1137]: I0414 13:52:26.723251    1137 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfjf5\" (UniqueName: \"kubernetes.io/projected/1fdc0e8c-7d00-47f8-b85e-fb37a6c90300-kube-api-access-nfjf5\") pod \"storage-provisioner\" (UID: \"1fdc0e8c-7d00-47f8-b85e-fb37a6c90300\") " pod="kube-system/storage-provisioner"
	Apr 14 13:52:26 test-preload-944524 kubelet[1137]: I0414 13:52:26.723281    1137 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb-config-volume\") pod \"coredns-6d4b75cb6d-c2p94\" (UID: \"5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb\") " pod="kube-system/coredns-6d4b75cb6d-c2p94"
	Apr 14 13:52:26 test-preload-944524 kubelet[1137]: I0414 13:52:26.723301    1137 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13fd9f92-d334-4ecd-a25c-71a452dea8d9-xtables-lock\") pod \"kube-proxy-kgqdm\" (UID: \"13fd9f92-d334-4ecd-a25c-71a452dea8d9\") " pod="kube-system/kube-proxy-kgqdm"
	Apr 14 13:52:26 test-preload-944524 kubelet[1137]: I0414 13:52:26.723359    1137 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/13fd9f92-d334-4ecd-a25c-71a452dea8d9-kube-proxy\") pod \"kube-proxy-kgqdm\" (UID: \"13fd9f92-d334-4ecd-a25c-71a452dea8d9\") " pod="kube-system/kube-proxy-kgqdm"
	Apr 14 13:52:26 test-preload-944524 kubelet[1137]: I0414 13:52:26.723376    1137 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13fd9f92-d334-4ecd-a25c-71a452dea8d9-lib-modules\") pod \"kube-proxy-kgqdm\" (UID: \"13fd9f92-d334-4ecd-a25c-71a452dea8d9\") " pod="kube-system/kube-proxy-kgqdm"
	Apr 14 13:52:26 test-preload-944524 kubelet[1137]: I0414 13:52:26.723398    1137 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1fdc0e8c-7d00-47f8-b85e-fb37a6c90300-tmp\") pod \"storage-provisioner\" (UID: \"1fdc0e8c-7d00-47f8-b85e-fb37a6c90300\") " pod="kube-system/storage-provisioner"
	Apr 14 13:52:26 test-preload-944524 kubelet[1137]: I0414 13:52:26.723418    1137 reconciler.go:159] "Reconciler: start to sync state"
	Apr 14 13:52:26 test-preload-944524 kubelet[1137]: E0414 13:52:26.827997    1137 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 13:52:26 test-preload-944524 kubelet[1137]: E0414 13:52:26.828737    1137 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb-config-volume podName:5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb nodeName:}" failed. No retries permitted until 2025-04-14 13:52:27.328699451 +0000 UTC m=+5.812796784 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb-config-volume") pod "coredns-6d4b75cb6d-c2p94" (UID: "5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb") : object "kube-system"/"coredns" not registered
	Apr 14 13:52:27 test-preload-944524 kubelet[1137]: I0414 13:52:27.052632    1137 kubelet_node_status.go:108] "Node was previously registered" node="test-preload-944524"
	Apr 14 13:52:27 test-preload-944524 kubelet[1137]: I0414 13:52:27.052848    1137 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-944524"
	Apr 14 13:52:27 test-preload-944524 kubelet[1137]: I0414 13:52:27.056293    1137 setters.go:532] "Node became not ready" node="test-preload-944524" condition={Type:Ready Status:False LastHeartbeatTime:2025-04-14 13:52:27.056112361 +0000 UTC m=+5.540209674 LastTransitionTime:2025-04-14 13:52:27.056112361 +0000 UTC m=+5.540209674 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Apr 14 13:52:27 test-preload-944524 kubelet[1137]: E0414 13:52:27.330834    1137 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 13:52:27 test-preload-944524 kubelet[1137]: E0414 13:52:27.330902    1137 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb-config-volume podName:5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb nodeName:}" failed. No retries permitted until 2025-04-14 13:52:28.330884033 +0000 UTC m=+6.814981347 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb-config-volume") pod "coredns-6d4b75cb6d-c2p94" (UID: "5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb") : object "kube-system"/"coredns" not registered
	Apr 14 13:52:28 test-preload-944524 kubelet[1137]: E0414 13:52:28.336870    1137 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 13:52:28 test-preload-944524 kubelet[1137]: E0414 13:52:28.337016    1137 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb-config-volume podName:5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb nodeName:}" failed. No retries permitted until 2025-04-14 13:52:30.337000738 +0000 UTC m=+8.821098065 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb-config-volume") pod "coredns-6d4b75cb6d-c2p94" (UID: "5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb") : object "kube-system"/"coredns" not registered
	Apr 14 13:52:28 test-preload-944524 kubelet[1137]: E0414 13:52:28.761108    1137 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-c2p94" podUID=5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb
	Apr 14 13:52:30 test-preload-944524 kubelet[1137]: E0414 13:52:30.352932    1137 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 13:52:30 test-preload-944524 kubelet[1137]: E0414 13:52:30.353015    1137 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb-config-volume podName:5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb nodeName:}" failed. No retries permitted until 2025-04-14 13:52:34.353000899 +0000 UTC m=+12.837098213 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb-config-volume") pod "coredns-6d4b75cb6d-c2p94" (UID: "5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb") : object "kube-system"/"coredns" not registered
	Apr 14 13:52:30 test-preload-944524 kubelet[1137]: E0414 13:52:30.761893    1137 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-c2p94" podUID=5b1bb0ac-05ef-441c-9f71-9ebbd8f94ddb
	
	
	==> storage-provisioner [09fec2e246ee8583a6209aa5688315e3bf213a40855496691a80062a7af285fc] <==
	I0414 13:52:27.428112       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-944524 -n test-preload-944524
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-944524 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-944524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-944524
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-944524: (1.217246084s)
--- FAIL: TestPreload (208.98s)

TestKubernetesUpgrade (478.74s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-461086 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-461086 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m1.965950182s)

-- stdout --
	* [kubernetes-upgrade-461086] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20623
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-461086" primary control-plane node in "kubernetes-upgrade-461086" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0414 13:55:53.049321 2224933 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:55:53.049563 2224933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:55:53.049572 2224933 out.go:358] Setting ErrFile to fd 2...
	I0414 13:55:53.049577 2224933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:55:53.049747 2224933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 13:55:53.050336 2224933 out.go:352] Setting JSON to false
	I0414 13:55:53.051353 2224933 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":167892,"bootTime":1744471061,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 13:55:53.051458 2224933 start.go:139] virtualization: kvm guest
	I0414 13:55:53.053543 2224933 out.go:177] * [kubernetes-upgrade-461086] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 13:55:53.055127 2224933 out.go:177]   - MINIKUBE_LOCATION=20623
	I0414 13:55:53.055151 2224933 notify.go:220] Checking for updates...
	I0414 13:55:53.057404 2224933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 13:55:53.058678 2224933 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 13:55:53.059718 2224933 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 13:55:53.060627 2224933 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 13:55:53.061651 2224933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 13:55:53.063308 2224933 config.go:182] Loaded profile config "NoKubernetes-489001": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:55:53.063445 2224933 config.go:182] Loaded profile config "cert-expiration-528114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:55:53.063582 2224933 config.go:182] Loaded profile config "offline-crio-468991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:55:53.063700 2224933 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 13:55:53.100764 2224933 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 13:55:53.101992 2224933 start.go:297] selected driver: kvm2
	I0414 13:55:53.102008 2224933 start.go:901] validating driver "kvm2" against <nil>
	I0414 13:55:53.102020 2224933 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 13:55:53.102759 2224933 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:55:53.102841 2224933 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20623-2183077/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 13:55:53.118254 2224933 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 13:55:53.118310 2224933 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 13:55:53.118543 2224933 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 13:55:53.118578 2224933 cni.go:84] Creating CNI manager for ""
	I0414 13:55:53.118626 2224933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 13:55:53.118635 2224933 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 13:55:53.118695 2224933 start.go:340] cluster config:
	{Name:kubernetes-upgrade-461086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-461086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:55:53.118796 2224933 iso.go:125] acquiring lock: {Name:mk1b6bc811d798b73231639961523f4c8d001a9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:55:53.120410 2224933 out.go:177] * Starting "kubernetes-upgrade-461086" primary control-plane node in "kubernetes-upgrade-461086" cluster
	I0414 13:55:53.121396 2224933 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 13:55:53.121444 2224933 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 13:55:53.121459 2224933 cache.go:56] Caching tarball of preloaded images
	I0414 13:55:53.121550 2224933 preload.go:172] Found /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 13:55:53.121563 2224933 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0414 13:55:53.121665 2224933 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/config.json ...
	I0414 13:55:53.121689 2224933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/config.json: {Name:mk7c131a70d8035aa70e0544cab1138440e1c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:55:53.121837 2224933 start.go:360] acquireMachinesLock for kubernetes-upgrade-461086: {Name:mka8bf7d0904b7ab9a32ecac2c5513c5d5418afd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 13:56:22.021761 2224933 start.go:364] duration metric: took 28.899885215s to acquireMachinesLock for "kubernetes-upgrade-461086"
	I0414 13:56:22.021879 2224933 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-461086 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-461086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 13:56:22.022026 2224933 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 13:56:22.023723 2224933 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 13:56:22.023930 2224933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:56:22.024005 2224933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:56:22.041097 2224933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34431
	I0414 13:56:22.041586 2224933 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:56:22.042181 2224933 main.go:141] libmachine: Using API Version  1
	I0414 13:56:22.042206 2224933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:56:22.042648 2224933 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:56:22.042864 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetMachineName
	I0414 13:56:22.043030 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 13:56:22.043191 2224933 start.go:159] libmachine.API.Create for "kubernetes-upgrade-461086" (driver="kvm2")
	I0414 13:56:22.043224 2224933 client.go:168] LocalClient.Create starting
	I0414 13:56:22.043258 2224933 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem
	I0414 13:56:22.043304 2224933 main.go:141] libmachine: Decoding PEM data...
	I0414 13:56:22.043325 2224933 main.go:141] libmachine: Parsing certificate...
	I0414 13:56:22.043442 2224933 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem
	I0414 13:56:22.043472 2224933 main.go:141] libmachine: Decoding PEM data...
	I0414 13:56:22.043496 2224933 main.go:141] libmachine: Parsing certificate...
	I0414 13:56:22.043521 2224933 main.go:141] libmachine: Running pre-create checks...
	I0414 13:56:22.043534 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .PreCreateCheck
	I0414 13:56:22.043949 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetConfigRaw
	I0414 13:56:22.044420 2224933 main.go:141] libmachine: Creating machine...
	I0414 13:56:22.044434 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .Create
	I0414 13:56:22.044615 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) creating KVM machine...
	I0414 13:56:22.044629 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) creating network...
	I0414 13:56:22.045915 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found existing default KVM network
	I0414 13:56:22.046870 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:22.046666 2225294 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:3b:85:4f} reservation:<nil>}
	I0414 13:56:22.047848 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:22.047760 2225294 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000205230}
	I0414 13:56:22.047889 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | created network xml: 
	I0414 13:56:22.047912 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | <network>
	I0414 13:56:22.047924 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG |   <name>mk-kubernetes-upgrade-461086</name>
	I0414 13:56:22.047940 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG |   <dns enable='no'/>
	I0414 13:56:22.047951 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG |   
	I0414 13:56:22.047960 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0414 13:56:22.047972 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG |     <dhcp>
	I0414 13:56:22.047985 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0414 13:56:22.047996 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG |     </dhcp>
	I0414 13:56:22.048017 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG |   </ip>
	I0414 13:56:22.048029 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG |   
	I0414 13:56:22.048038 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | </network>
	I0414 13:56:22.048049 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | 
	I0414 13:56:22.053681 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | trying to create private KVM network mk-kubernetes-upgrade-461086 192.168.50.0/24...
	I0414 13:56:22.131473 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | private KVM network mk-kubernetes-upgrade-461086 192.168.50.0/24 created
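For reference, the private network the kvm2 driver just created from the XML it printed above can also be set up by hand with virsh. A minimal sketch, assuming the generated XML has been saved locally as mk-net.xml (a hypothetical filename):

    # define and start a libvirt network from the driver-generated XML
    virsh net-define mk-net.xml
    virsh net-start mk-kubernetes-upgrade-461086
    virsh net-autostart mk-kubernetes-upgrade-461086
    # confirm the network is active and which bridge it was assigned
    virsh net-list --all

The DHCP range in that XML (192.168.50.2 through 192.168.50.253) is what later hands the guest its address.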
	I0414 13:56:22.131514 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) setting up store path in /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086 ...
	I0414 13:56:22.131529 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:22.131447 2225294 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 13:56:22.131547 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) building disk image from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 13:56:22.131659 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Downloading /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 13:56:22.441449 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:22.441288 2225294 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa...
	I0414 13:56:22.898536 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:22.898380 2225294 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/kubernetes-upgrade-461086.rawdisk...
	I0414 13:56:22.898576 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | Writing magic tar header
	I0414 13:56:22.898592 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | Writing SSH key tar header
	I0414 13:56:22.898686 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:22.898620 2225294 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086 ...
	I0414 13:56:22.898813 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086
	I0414 13:56:22.898837 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086 (perms=drwx------)
	I0414 13:56:22.898858 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines
	I0414 13:56:22.898871 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines (perms=drwxr-xr-x)
	I0414 13:56:22.898889 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube (perms=drwxr-xr-x)
	I0414 13:56:22.898904 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077 (perms=drwxrwxr-x)
	I0414 13:56:22.898917 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 13:56:22.898932 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 13:56:22.898943 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 13:56:22.898956 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077
	I0414 13:56:22.898972 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 13:56:22.898984 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | checking permissions on dir: /home/jenkins
	I0414 13:56:22.898996 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | checking permissions on dir: /home
	I0414 13:56:22.899004 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | skipping /home - not owner
	I0414 13:56:22.899016 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) creating domain...
	I0414 13:56:23.202884 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) define libvirt domain using xml: 
	I0414 13:56:23.202922 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) <domain type='kvm'>
	I0414 13:56:23.202936 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)   <name>kubernetes-upgrade-461086</name>
	I0414 13:56:23.202944 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)   <memory unit='MiB'>2200</memory>
	I0414 13:56:23.202954 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)   <vcpu>2</vcpu>
	I0414 13:56:23.202961 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)   <features>
	I0414 13:56:23.202968 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     <acpi/>
	I0414 13:56:23.202990 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     <apic/>
	I0414 13:56:23.203004 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     <pae/>
	I0414 13:56:23.203011 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     
	I0414 13:56:23.203020 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)   </features>
	I0414 13:56:23.203031 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)   <cpu mode='host-passthrough'>
	I0414 13:56:23.203039 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)   
	I0414 13:56:23.203049 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)   </cpu>
	I0414 13:56:23.203084 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)   <os>
	I0414 13:56:23.203125 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     <type>hvm</type>
	I0414 13:56:23.203136 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     <boot dev='cdrom'/>
	I0414 13:56:23.203143 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     <boot dev='hd'/>
	I0414 13:56:23.203152 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     <bootmenu enable='no'/>
	I0414 13:56:23.203166 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)   </os>
	I0414 13:56:23.203188 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)   <devices>
	I0414 13:56:23.203204 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     <disk type='file' device='cdrom'>
	I0414 13:56:23.203230 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/boot2docker.iso'/>
	I0414 13:56:23.203250 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)       <target dev='hdc' bus='scsi'/>
	I0414 13:56:23.203259 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)       <readonly/>
	I0414 13:56:23.203269 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     </disk>
	I0414 13:56:23.203278 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     <disk type='file' device='disk'>
	I0414 13:56:23.203291 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 13:56:23.203307 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/kubernetes-upgrade-461086.rawdisk'/>
	I0414 13:56:23.203320 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)       <target dev='hda' bus='virtio'/>
	I0414 13:56:23.203329 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     </disk>
	I0414 13:56:23.203336 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     <interface type='network'>
	I0414 13:56:23.203346 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)       <source network='mk-kubernetes-upgrade-461086'/>
	I0414 13:56:23.203353 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)       <model type='virtio'/>
	I0414 13:56:23.203362 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     </interface>
	I0414 13:56:23.203369 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     <interface type='network'>
	I0414 13:56:23.203380 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)       <source network='default'/>
	I0414 13:56:23.203437 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)       <model type='virtio'/>
	I0414 13:56:23.203451 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     </interface>
	I0414 13:56:23.203459 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     <serial type='pty'>
	I0414 13:56:23.203470 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)       <target port='0'/>
	I0414 13:56:23.203481 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     </serial>
	I0414 13:56:23.203490 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     <console type='pty'>
	I0414 13:56:23.203501 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)       <target type='serial' port='0'/>
	I0414 13:56:23.203512 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     </console>
	I0414 13:56:23.203522 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     <rng model='virtio'>
	I0414 13:56:23.203534 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)       <backend model='random'>/dev/random</backend>
	I0414 13:56:23.203546 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     </rng>
	I0414 13:56:23.203554 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     
	I0414 13:56:23.203560 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)     
	I0414 13:56:23.203569 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086)   </devices>
	I0414 13:56:23.203575 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) </domain>
	I0414 13:56:23.203589 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) 
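The define-and-boot sequence that follows can likewise be reproduced manually from the domain XML above. A minimal sketch, assuming the XML has been saved as kubernetes-upgrade-461086.xml (a hypothetical filename); the last command mirrors the "waiting for IP" retries logged below:

    # register the domain from the generated XML and boot it
    virsh define kubernetes-upgrade-461086.xml
    virsh start kubernetes-upgrade-461086
    # poll the DHCP leases for the address the driver is waiting on
    virsh domifaddr kubernetes-upgrade-461086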
	I0414 13:56:23.211404 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:fc:d8:6c in network default
	I0414 13:56:23.212131 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) starting domain...
	I0414 13:56:23.212156 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:23.212164 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) ensuring networks are active...
	I0414 13:56:23.213224 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Ensuring network default is active
	I0414 13:56:23.213581 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Ensuring network mk-kubernetes-upgrade-461086 is active
	I0414 13:56:23.214140 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) getting domain XML...
	I0414 13:56:23.215027 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) creating domain...
	I0414 13:56:24.741788 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) waiting for IP...
	I0414 13:56:24.743130 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:24.743720 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | unable to find current IP address of domain kubernetes-upgrade-461086 in network mk-kubernetes-upgrade-461086
	I0414 13:56:24.743738 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:24.743615 2225294 retry.go:31] will retry after 259.303985ms: waiting for domain to come up
	I0414 13:56:25.005433 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:25.006117 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | unable to find current IP address of domain kubernetes-upgrade-461086 in network mk-kubernetes-upgrade-461086
	I0414 13:56:25.006148 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:25.006053 2225294 retry.go:31] will retry after 287.399336ms: waiting for domain to come up
	I0414 13:56:25.295777 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:25.296415 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | unable to find current IP address of domain kubernetes-upgrade-461086 in network mk-kubernetes-upgrade-461086
	I0414 13:56:25.296441 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:25.296377 2225294 retry.go:31] will retry after 485.351909ms: waiting for domain to come up
	I0414 13:56:25.783015 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:25.783493 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | unable to find current IP address of domain kubernetes-upgrade-461086 in network mk-kubernetes-upgrade-461086
	I0414 13:56:25.783565 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:25.783489 2225294 retry.go:31] will retry after 432.699896ms: waiting for domain to come up
	I0414 13:56:26.218481 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:26.219080 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | unable to find current IP address of domain kubernetes-upgrade-461086 in network mk-kubernetes-upgrade-461086
	I0414 13:56:26.219130 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:26.219036 2225294 retry.go:31] will retry after 523.616719ms: waiting for domain to come up
	I0414 13:56:26.745009 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:26.745561 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | unable to find current IP address of domain kubernetes-upgrade-461086 in network mk-kubernetes-upgrade-461086
	I0414 13:56:26.745689 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:26.745520 2225294 retry.go:31] will retry after 927.983693ms: waiting for domain to come up
	I0414 13:56:27.675127 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:27.675677 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | unable to find current IP address of domain kubernetes-upgrade-461086 in network mk-kubernetes-upgrade-461086
	I0414 13:56:27.675711 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:27.675603 2225294 retry.go:31] will retry after 918.65731ms: waiting for domain to come up
	I0414 13:56:28.595768 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:28.596233 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | unable to find current IP address of domain kubernetes-upgrade-461086 in network mk-kubernetes-upgrade-461086
	I0414 13:56:28.596266 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:28.596203 2225294 retry.go:31] will retry after 910.975379ms: waiting for domain to come up
	I0414 13:56:29.508547 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:29.509079 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | unable to find current IP address of domain kubernetes-upgrade-461086 in network mk-kubernetes-upgrade-461086
	I0414 13:56:29.509144 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:29.509051 2225294 retry.go:31] will retry after 1.240768804s: waiting for domain to come up
	I0414 13:56:30.751425 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:30.751967 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | unable to find current IP address of domain kubernetes-upgrade-461086 in network mk-kubernetes-upgrade-461086
	I0414 13:56:30.751996 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:30.751927 2225294 retry.go:31] will retry after 1.902225815s: waiting for domain to come up
	I0414 13:56:32.655666 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:32.656213 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | unable to find current IP address of domain kubernetes-upgrade-461086 in network mk-kubernetes-upgrade-461086
	I0414 13:56:32.656246 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:32.656162 2225294 retry.go:31] will retry after 2.686297233s: waiting for domain to come up
	I0414 13:56:35.346224 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:35.346787 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | unable to find current IP address of domain kubernetes-upgrade-461086 in network mk-kubernetes-upgrade-461086
	I0414 13:56:35.346817 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:35.346737 2225294 retry.go:31] will retry after 3.141560804s: waiting for domain to come up
	I0414 13:56:38.489793 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:38.490222 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | unable to find current IP address of domain kubernetes-upgrade-461086 in network mk-kubernetes-upgrade-461086
	I0414 13:56:38.490346 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:38.490220 2225294 retry.go:31] will retry after 3.559781243s: waiting for domain to come up
	I0414 13:56:42.052013 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:42.052409 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | unable to find current IP address of domain kubernetes-upgrade-461086 in network mk-kubernetes-upgrade-461086
	I0414 13:56:42.052432 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 13:56:42.052392 2225294 retry.go:31] will retry after 4.433345903s: waiting for domain to come up
	I0414 13:56:46.489784 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:46.490227 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has current primary IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:46.490256 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) found domain IP: 192.168.50.41
	I0414 13:56:46.490270 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) reserving static IP address...
	I0414 13:56:46.490693 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-461086", mac: "52:54:00:66:0c:5b", ip: "192.168.50.41"} in network mk-kubernetes-upgrade-461086
	I0414 13:56:46.578139 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) reserved static IP address 192.168.50.41 for domain kubernetes-upgrade-461086
	I0414 13:56:46.578179 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | Getting to WaitForSSH function...
	I0414 13:56:46.578188 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) waiting for SSH...
	I0414 13:56:46.580654 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:46.581083 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 14:56:39 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:minikube Clientid:01:52:54:00:66:0c:5b}
	I0414 13:56:46.581138 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:46.581259 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | Using SSH client type: external
	I0414 13:56:46.581289 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | Using SSH private key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa (-rw-------)
	I0414 13:56:46.581339 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 13:56:46.581358 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | About to run SSH command:
	I0414 13:56:46.581373 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | exit 0
	I0414 13:56:46.713162 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | SSH cmd err, output: <nil>: 
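Once this probe returns exit 0 the guest is reachable over SSH. The same session the driver opens with the options above can also be opened interactively through the profile, as a quick check:

    # open a shell on the freshly created guest via the minikube profile
    out/minikube-linux-amd64 ssh -p kubernetes-upgrade-461086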
	I0414 13:56:46.713442 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) KVM machine creation complete
	I0414 13:56:46.713834 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetConfigRaw
	I0414 13:56:46.714402 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 13:56:46.714661 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 13:56:46.714916 2224933 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 13:56:46.714937 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetState
	I0414 13:56:46.716559 2224933 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 13:56:46.716574 2224933 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 13:56:46.716580 2224933 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 13:56:46.716586 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 13:56:46.719552 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:46.719941 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 14:56:39 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 13:56:46.719974 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:46.720069 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 13:56:46.720258 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 13:56:46.720453 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 13:56:46.720650 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 13:56:46.720824 2224933 main.go:141] libmachine: Using SSH client type: native
	I0414 13:56:46.721134 2224933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 13:56:46.721152 2224933 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 13:56:46.828150 2224933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:56:46.828181 2224933 main.go:141] libmachine: Detecting the provisioner...
	I0414 13:56:46.828194 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 13:56:46.831333 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:46.831674 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 14:56:39 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 13:56:46.831729 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:46.831843 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 13:56:46.832051 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 13:56:46.832211 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 13:56:46.832315 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 13:56:46.832474 2224933 main.go:141] libmachine: Using SSH client type: native
	I0414 13:56:46.832695 2224933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 13:56:46.832710 2224933 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 13:56:46.937880 2224933 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 13:56:46.937951 2224933 main.go:141] libmachine: found compatible host: buildroot
	I0414 13:56:46.937961 2224933 main.go:141] libmachine: Provisioning with buildroot...
	I0414 13:56:46.937969 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetMachineName
	I0414 13:56:46.938231 2224933 buildroot.go:166] provisioning hostname "kubernetes-upgrade-461086"
	I0414 13:56:46.938264 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetMachineName
	I0414 13:56:46.938451 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 13:56:46.941101 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:46.941570 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 14:56:39 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 13:56:46.941602 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:46.941872 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 13:56:46.942076 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 13:56:46.942256 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 13:56:46.942424 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 13:56:46.942612 2224933 main.go:141] libmachine: Using SSH client type: native
	I0414 13:56:46.942924 2224933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 13:56:46.942945 2224933 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-461086 && echo "kubernetes-upgrade-461086" | sudo tee /etc/hostname
	I0414 13:56:47.064613 2224933 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-461086
	
	I0414 13:56:47.064657 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 13:56:47.068342 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.068838 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 14:56:39 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 13:56:47.068874 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.069144 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 13:56:47.069416 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 13:56:47.069607 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 13:56:47.069836 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 13:56:47.070062 2224933 main.go:141] libmachine: Using SSH client type: native
	I0414 13:56:47.070407 2224933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 13:56:47.070436 2224933 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-461086' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-461086/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-461086' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 13:56:47.190271 2224933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
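The snippet above only rewrites or appends the 127.0.1.1 mapping when the profile name is not already present in /etc/hosts. A quick way to confirm the result on the guest:

    # show the loopback hostname mapping the provisioner manages
    grep '^127.0.1.1' /etc/hosts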
	I0414 13:56:47.190311 2224933 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20623-2183077/.minikube CaCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20623-2183077/.minikube}
	I0414 13:56:47.190368 2224933 buildroot.go:174] setting up certificates
	I0414 13:56:47.190380 2224933 provision.go:84] configureAuth start
	I0414 13:56:47.190401 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetMachineName
	I0414 13:56:47.190694 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetIP
	I0414 13:56:47.193755 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.194002 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 14:56:39 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 13:56:47.194033 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.194190 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 13:56:47.196509 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.196815 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 14:56:39 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 13:56:47.196844 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.196984 2224933 provision.go:143] copyHostCerts
	I0414 13:56:47.197064 2224933 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem, removing ...
	I0414 13:56:47.197091 2224933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem
	I0414 13:56:47.197155 2224933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem (1123 bytes)
	I0414 13:56:47.197290 2224933 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem, removing ...
	I0414 13:56:47.197301 2224933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem
	I0414 13:56:47.197323 2224933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem (1675 bytes)
	I0414 13:56:47.197391 2224933 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem, removing ...
	I0414 13:56:47.197404 2224933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem
	I0414 13:56:47.197440 2224933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem (1078 bytes)
	I0414 13:56:47.197507 2224933 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-461086 san=[127.0.0.1 192.168.50.41 kubernetes-upgrade-461086 localhost minikube]
	I0414 13:56:47.222047 2224933 provision.go:177] copyRemoteCerts
	I0414 13:56:47.222109 2224933 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 13:56:47.222140 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 13:56:47.224916 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.225171 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 14:56:39 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 13:56:47.225204 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.225339 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 13:56:47.225544 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 13:56:47.225714 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 13:56:47.225855 2224933 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa Username:docker}
	I0414 13:56:47.312618 2224933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 13:56:47.338037 2224933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0414 13:56:47.364548 2224933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
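With the CA, server certificate, and key now copied to /etc/docker on the guest, the SANs requested at generation time (127.0.0.1, 192.168.50.41, the profile name, localhost, minikube) can be verified with openssl. A minimal check run inside the guest:

    # list the Subject Alternative Names baked into the provisioned server cert
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'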
	I0414 13:56:47.392211 2224933 provision.go:87] duration metric: took 201.808587ms to configureAuth
	I0414 13:56:47.392246 2224933 buildroot.go:189] setting minikube options for container-runtime
	I0414 13:56:47.392481 2224933 config.go:182] Loaded profile config "kubernetes-upgrade-461086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 13:56:47.392580 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 13:56:47.395331 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.395633 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 14:56:39 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 13:56:47.395681 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.395878 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 13:56:47.396074 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 13:56:47.396266 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 13:56:47.396407 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 13:56:47.396562 2224933 main.go:141] libmachine: Using SSH client type: native
	I0414 13:56:47.396857 2224933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 13:56:47.396895 2224933 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 13:56:47.627150 2224933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
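Note on the step above: the runner writes a /etc/sysconfig/crio.minikube drop-in containing CRIO_MINIKUBE_OPTIONS and restarts CRI-O over SSH. The Go sketch below only assembles that guest-side shell command from a list of insecure registries; it is an illustration of the command seen in the log, not minikube's actual provisioner code, and the helper name is made up.

package main

import "fmt"

// buildCrioOptionsCmd is a hypothetical helper that renders the shell command
// the log runs over SSH: create /etc/sysconfig, write the drop-in via tee,
// then restart crio so it picks the flags up.
func buildCrioOptionsCmd(insecureRegistries []string) string {
	opts := ""
	for _, r := range insecureRegistries {
		opts += "--insecure-registry " + r + " "
	}
	content := "\nCRIO_MINIKUBE_OPTIONS='" + opts + "'\n"
	return "sudo mkdir -p /etc/sysconfig && printf %s \"" + content + "\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
}

func main() {
	// Mirrors the service CIDR used in the log above.
	fmt.Println(buildCrioOptionsCmd([]string{"10.96.0.0/12"}))
}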
	I0414 13:56:47.627179 2224933 main.go:141] libmachine: Checking connection to Docker...
	I0414 13:56:47.627190 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetURL
	I0414 13:56:47.628600 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | using libvirt version 6000000
	I0414 13:56:47.630815 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.631166 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 14:56:39 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 13:56:47.631206 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.631342 2224933 main.go:141] libmachine: Docker is up and running!
	I0414 13:56:47.631354 2224933 main.go:141] libmachine: Reticulating splines...
	I0414 13:56:47.631362 2224933 client.go:171] duration metric: took 25.588127157s to LocalClient.Create
	I0414 13:56:47.631384 2224933 start.go:167] duration metric: took 25.588198153s to libmachine.API.Create "kubernetes-upgrade-461086"
	I0414 13:56:47.631395 2224933 start.go:293] postStartSetup for "kubernetes-upgrade-461086" (driver="kvm2")
	I0414 13:56:47.631405 2224933 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 13:56:47.631422 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 13:56:47.631649 2224933 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 13:56:47.631683 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 13:56:47.634023 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.634433 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 14:56:39 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 13:56:47.634462 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.634767 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 13:56:47.634928 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 13:56:47.635143 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 13:56:47.635320 2224933 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa Username:docker}
	I0414 13:56:47.715808 2224933 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 13:56:47.720398 2224933 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 13:56:47.720425 2224933 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/addons for local assets ...
	I0414 13:56:47.720485 2224933 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/files for local assets ...
	I0414 13:56:47.720555 2224933 filesync.go:149] local asset: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem -> 21904002.pem in /etc/ssl/certs
	I0414 13:56:47.720666 2224933 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 13:56:47.732126 2224933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 13:56:47.756684 2224933 start.go:296] duration metric: took 125.270304ms for postStartSetup
	I0414 13:56:47.756777 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetConfigRaw
	I0414 13:56:47.757428 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetIP
	I0414 13:56:47.760260 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.760602 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 14:56:39 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 13:56:47.760646 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.760881 2224933 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/config.json ...
	I0414 13:56:47.761063 2224933 start.go:128] duration metric: took 25.739022779s to createHost
	I0414 13:56:47.761087 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 13:56:47.763499 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.763852 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 14:56:39 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 13:56:47.763879 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.764071 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 13:56:47.764258 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 13:56:47.764418 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 13:56:47.764545 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 13:56:47.764687 2224933 main.go:141] libmachine: Using SSH client type: native
	I0414 13:56:47.764936 2224933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 13:56:47.764948 2224933 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 13:56:47.877452 2224933 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744639007.857745371
	
	I0414 13:56:47.877484 2224933 fix.go:216] guest clock: 1744639007.857745371
	I0414 13:56:47.877495 2224933 fix.go:229] Guest: 2025-04-14 13:56:47.857745371 +0000 UTC Remote: 2025-04-14 13:56:47.761075978 +0000 UTC m=+54.750961211 (delta=96.669393ms)
	I0414 13:56:47.877543 2224933 fix.go:200] guest clock delta is within tolerance: 96.669393ms
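The three fix.go lines above are a clock-skew check: the runner executes `date +%s.%N` on the guest, parses the result, and compares it against the host clock. The standalone sketch below reproduces that comparison using the timestamp from the log; it is not the minikube implementation, and the one-second tolerance is an assumption for illustration.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		for len(frac) < 9 { // pad to nanosecond precision
			frac += "0"
		}
		nsec, err = strconv.ParseInt(frac[:9], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1744639007.857745371\n") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // illustrative threshold, not minikube's
	fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta <= tolerance)
}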
	I0414 13:56:47.877551 2224933 start.go:83] releasing machines lock for "kubernetes-upgrade-461086", held for 25.855734716s
	I0414 13:56:47.877587 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 13:56:47.877893 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetIP
	I0414 13:56:47.881177 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.881640 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 14:56:39 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 13:56:47.881669 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.881975 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 13:56:47.882714 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 13:56:47.882950 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 13:56:47.883070 2224933 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 13:56:47.883118 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 13:56:47.883163 2224933 ssh_runner.go:195] Run: cat /version.json
	I0414 13:56:47.883197 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 13:56:47.886458 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.886840 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.886897 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 14:56:39 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 13:56:47.886922 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.887173 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 13:56:47.887283 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 14:56:39 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 13:56:47.887308 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:47.887536 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 13:56:47.887715 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 13:56:47.887851 2224933 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa Username:docker}
	I0414 13:56:47.887900 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 13:56:47.888042 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 13:56:47.888214 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 13:56:47.888398 2224933 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa Username:docker}
	I0414 13:56:47.996244 2224933 ssh_runner.go:195] Run: systemctl --version
	I0414 13:56:48.002469 2224933 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 13:56:48.167355 2224933 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 13:56:48.174638 2224933 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 13:56:48.174733 2224933 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 13:56:48.194474 2224933 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 13:56:48.194500 2224933 start.go:495] detecting cgroup driver to use...
	I0414 13:56:48.194560 2224933 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 13:56:48.212507 2224933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 13:56:48.231389 2224933 docker.go:217] disabling cri-docker service (if available) ...
	I0414 13:56:48.231470 2224933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 13:56:48.247317 2224933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 13:56:48.262933 2224933 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 13:56:48.408054 2224933 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 13:56:48.569663 2224933 docker.go:233] disabling docker service ...
	I0414 13:56:48.569761 2224933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 13:56:48.593595 2224933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 13:56:48.608691 2224933 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 13:56:48.761162 2224933 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 13:56:48.900642 2224933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 13:56:48.916963 2224933 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 13:56:48.938166 2224933 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 13:56:48.938238 2224933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:56:48.951288 2224933 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 13:56:48.951369 2224933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:56:48.964629 2224933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:56:48.975998 2224933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
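The sed edits above pin pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf. As a rough illustration of what those substitutions do (not minikube's code), the sketch below applies the same line rewrites to a config held in memory; the starting config values are placeholders.

package main

import (
	"fmt"
	"regexp"
)

// setOption replaces any existing `key = ...` line with the desired value,
// mirroring the `sed -i 's|^.*key = .*$|key = "value"|'` edits in the log,
// or appends the line if the key is absent.
func setOption(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.MatchString(conf) {
		return re.ReplaceAllString(conf, line)
	}
	return conf + line + "\n"
}

func main() {
	// Placeholder starting config for the demo.
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.1\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	conf = setOption(conf, "pause_image", "registry.k8s.io/pause:3.2")
	conf = setOption(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}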
	I0414 13:56:48.987938 2224933 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 13:56:49.000212 2224933 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 13:56:49.009854 2224933 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 13:56:49.009924 2224933 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 13:56:49.023752 2224933 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 13:56:49.034207 2224933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:56:49.148848 2224933 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 13:56:49.245829 2224933 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 13:56:49.245938 2224933 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 13:56:49.250941 2224933 start.go:563] Will wait 60s for crictl version
	I0414 13:56:49.250998 2224933 ssh_runner.go:195] Run: which crictl
	I0414 13:56:49.254657 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 13:56:49.296971 2224933 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 13:56:49.297087 2224933 ssh_runner.go:195] Run: crio --version
	I0414 13:56:49.327703 2224933 ssh_runner.go:195] Run: crio --version
	I0414 13:56:49.360674 2224933 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 13:56:49.361872 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetIP
	I0414 13:56:49.364596 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:49.364990 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 14:56:39 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 13:56:49.365017 2224933 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 13:56:49.365182 2224933 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0414 13:56:49.369594 2224933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:56:49.382936 2224933 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-461086 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-461086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 13:56:49.383050 2224933 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 13:56:49.383101 2224933 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:56:49.418351 2224933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 13:56:49.418425 2224933 ssh_runner.go:195] Run: which lz4
	I0414 13:56:49.422585 2224933 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 13:56:49.427195 2224933 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 13:56:49.427231 2224933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 13:56:51.073347 2224933 crio.go:462] duration metric: took 1.650786407s to copy over tarball
	I0414 13:56:51.073426 2224933 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 13:56:53.739880 2224933 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.666419222s)
	I0414 13:56:53.739915 2224933 crio.go:469] duration metric: took 2.666532255s to extract the tarball
	I0414 13:56:53.739927 2224933 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 13:56:53.783649 2224933 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:56:53.835603 2224933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 13:56:53.835631 2224933 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 13:56:53.835729 2224933 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:56:53.835757 2224933 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 13:56:53.835729 2224933 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 13:56:53.835764 2224933 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:56:53.835869 2224933 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:56:53.835730 2224933 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:56:53.836002 2224933 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 13:56:53.836101 2224933 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:56:53.837386 2224933 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:56:53.837493 2224933 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:56:53.837516 2224933 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 13:56:53.837573 2224933 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:56:53.837602 2224933 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:56:53.837643 2224933 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:56:53.837764 2224933 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 13:56:53.837832 2224933 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 13:56:54.001107 2224933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:56:54.003680 2224933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 13:56:54.034286 2224933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 13:56:54.056258 2224933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:56:54.063083 2224933 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 13:56:54.063130 2224933 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:56:54.063173 2224933 ssh_runner.go:195] Run: which crictl
	I0414 13:56:54.072268 2224933 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 13:56:54.072315 2224933 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 13:56:54.072361 2224933 ssh_runner.go:195] Run: which crictl
	I0414 13:56:54.120124 2224933 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 13:56:54.120180 2224933 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 13:56:54.120231 2224933 ssh_runner.go:195] Run: which crictl
	I0414 13:56:54.125672 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:56:54.125709 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 13:56:54.125777 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 13:56:54.125943 2224933 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 13:56:54.125995 2224933 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:56:54.126037 2224933 ssh_runner.go:195] Run: which crictl
	I0414 13:56:54.195683 2224933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:56:54.197109 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 13:56:54.197127 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:56:54.203973 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 13:56:54.204005 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:56:54.209108 2224933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:56:54.235387 2224933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 13:56:54.363799 2224933 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 13:56:54.363860 2224933 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:56:54.363870 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 13:56:54.363888 2224933 ssh_runner.go:195] Run: which crictl
	I0414 13:56:54.363992 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:56:54.363999 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:56:54.364044 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 13:56:54.383443 2224933 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 13:56:54.383515 2224933 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:56:54.383580 2224933 ssh_runner.go:195] Run: which crictl
	I0414 13:56:54.402032 2224933 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 13:56:54.402093 2224933 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 13:56:54.402144 2224933 ssh_runner.go:195] Run: which crictl
	I0414 13:56:54.476263 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:56:54.476280 2224933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 13:56:54.476296 2224933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 13:56:54.486722 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:56:54.486770 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:56:54.486783 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 13:56:54.486790 2224933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 13:56:54.534051 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:56:54.589095 2224933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 13:56:54.589201 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:56:54.589694 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 13:56:54.625394 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:56:54.653023 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:56:54.669449 2224933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 13:56:54.732938 2224933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 13:56:54.732935 2224933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 13:56:54.740350 2224933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 13:56:56.631432 2224933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:56:56.780504 2224933 cache_images.go:92] duration metric: took 2.94485326s to LoadCachedImages
	W0414 13:56:56.780608 2224933 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0414 13:56:56.780628 2224933 kubeadm.go:934] updating node { 192.168.50.41 8443 v1.20.0 crio true true} ...
	I0414 13:56:56.780783 2224933 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-461086 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-461086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
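The systemd drop-in above is rendered from the node config echoed on the last line. Purely as an illustration (this is not minikube's templating code), the sketch below reassembles that ExecStart line from the three values that vary per node:

package main

import (
	"fmt"
	"strings"
)

// kubeletCommand rebuilds the ExecStart line from the drop-in above for a
// given Kubernetes version, node name, and node IP.
func kubeletCommand(version, nodeName, nodeIP string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime=remote",
		"--container-runtime-endpoint=unix:///var/run/crio/crio.sock",
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--network-plugin=cni",
		"--node-ip=" + nodeIP,
	}
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s",
		version, strings.Join(flags, " "))
}

func main() {
	fmt.Println(kubeletCommand("v1.20.0", "kubernetes-upgrade-461086", "192.168.50.41"))
}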
	I0414 13:56:56.780871 2224933 ssh_runner.go:195] Run: crio config
	I0414 13:56:56.840569 2224933 cni.go:84] Creating CNI manager for ""
	I0414 13:56:56.840597 2224933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 13:56:56.840612 2224933 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 13:56:56.840631 2224933 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.41 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-461086 NodeName:kubernetes-upgrade-461086 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 13:56:56.840835 2224933 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-461086"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 13:56:56.840913 2224933 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 13:56:56.854308 2224933 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 13:56:56.854394 2224933 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 13:56:56.864586 2224933 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0414 13:56:56.882649 2224933 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 13:56:56.899012 2224933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0414 13:56:56.915242 2224933 ssh_runner.go:195] Run: grep 192.168.50.41	control-plane.minikube.internal$ /etc/hosts
	I0414 13:56:56.919006 2224933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
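The bash one-liner above rewrites /etc/hosts so that exactly one line maps control-plane.minikube.internal to the node IP: it filters out any existing entry for that name, appends the fresh mapping, and copies the result back into place. Below is a minimal in-memory version of the same idea (illustrative only, not the code minikube runs):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry drops any existing line whose last field is `name` and
// appends a fresh "ip<TAB>name" mapping, mimicking the grep -v / echo / cp
// pipeline in the log.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		fields := strings.Fields(line)
		if len(fields) > 0 && fields[len(fields)-1] == name {
			continue // stale mapping, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.50.1\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(hosts, "192.168.50.41", "control-plane.minikube.internal"))
}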
	I0414 13:56:56.933969 2224933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:56:57.082615 2224933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 13:56:57.101300 2224933 certs.go:68] Setting up /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086 for IP: 192.168.50.41
	I0414 13:56:57.101327 2224933 certs.go:194] generating shared ca certs ...
	I0414 13:56:57.101351 2224933 certs.go:226] acquiring lock for ca certs: {Name:mkd994da28098ae08a84efba20f096b52fe71222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:56:57.101528 2224933 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key
	I0414 13:56:57.101582 2224933 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key
	I0414 13:56:57.101597 2224933 certs.go:256] generating profile certs ...
	I0414 13:56:57.101687 2224933 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/client.key
	I0414 13:56:57.101720 2224933 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/client.crt with IP's: []
	I0414 13:56:57.567750 2224933 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/client.crt ...
	I0414 13:56:57.567783 2224933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/client.crt: {Name:mkf36b5d116627a0f05f98b9bd4a940b04503b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:56:57.567977 2224933 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/client.key ...
	I0414 13:56:57.567995 2224933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/client.key: {Name:mk341673daa7730be6063c53fc7af7ad163fd7cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:56:57.568112 2224933 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/apiserver.key.105b5bc6
	I0414 13:56:57.568137 2224933 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/apiserver.crt.105b5bc6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.41]
	I0414 13:56:58.115714 2224933 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/apiserver.crt.105b5bc6 ...
	I0414 13:56:58.115766 2224933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/apiserver.crt.105b5bc6: {Name:mke2af8ed1031bfe547522ed2f0cfc22c89dc2f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:56:58.116060 2224933 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/apiserver.key.105b5bc6 ...
	I0414 13:56:58.116096 2224933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/apiserver.key.105b5bc6: {Name:mk13e3b358e2f89dff6dbefbf85788ba8cd5b694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:56:58.116240 2224933 certs.go:381] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/apiserver.crt.105b5bc6 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/apiserver.crt
	I0414 13:56:58.116352 2224933 certs.go:385] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/apiserver.key.105b5bc6 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/apiserver.key
	I0414 13:56:58.116437 2224933 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/proxy-client.key
	I0414 13:56:58.116463 2224933 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/proxy-client.crt with IP's: []
	I0414 13:56:58.447144 2224933 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/proxy-client.crt ...
	I0414 13:56:58.447181 2224933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/proxy-client.crt: {Name:mkb97aaf2d257f8b8a79f963c8abb241b02c7418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:56:58.447352 2224933 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/proxy-client.key ...
	I0414 13:56:58.447366 2224933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/proxy-client.key: {Name:mkfeee1a27b97a80af4e13bf00c8ad96c66adc75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:56:58.447555 2224933 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem (1338 bytes)
	W0414 13:56:58.447596 2224933 certs.go:480] ignoring /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400_empty.pem, impossibly tiny 0 bytes
	I0414 13:56:58.447606 2224933 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 13:56:58.447630 2224933 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem (1078 bytes)
	I0414 13:56:58.447652 2224933 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem (1123 bytes)
	I0414 13:56:58.447675 2224933 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem (1675 bytes)
	I0414 13:56:58.447712 2224933 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 13:56:58.448358 2224933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 13:56:58.478502 2224933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 13:56:58.506359 2224933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 13:56:58.534224 2224933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 13:56:58.562445 2224933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 13:56:58.590040 2224933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 13:56:58.618213 2224933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 13:56:58.646807 2224933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 13:56:58.676906 2224933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem --> /usr/share/ca-certificates/2190400.pem (1338 bytes)
	I0414 13:56:58.704185 2224933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /usr/share/ca-certificates/21904002.pem (1708 bytes)
	I0414 13:56:58.737411 2224933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 13:56:58.763681 2224933 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 13:56:58.781677 2224933 ssh_runner.go:195] Run: openssl version
	I0414 13:56:58.787991 2224933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2190400.pem && ln -fs /usr/share/ca-certificates/2190400.pem /etc/ssl/certs/2190400.pem"
	I0414 13:56:58.798699 2224933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2190400.pem
	I0414 13:56:58.803515 2224933 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 13:02 /usr/share/ca-certificates/2190400.pem
	I0414 13:56:58.803584 2224933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2190400.pem
	I0414 13:56:58.809568 2224933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2190400.pem /etc/ssl/certs/51391683.0"
	I0414 13:56:58.820120 2224933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21904002.pem && ln -fs /usr/share/ca-certificates/21904002.pem /etc/ssl/certs/21904002.pem"
	I0414 13:56:58.838201 2224933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21904002.pem
	I0414 13:56:58.844495 2224933 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 13:02 /usr/share/ca-certificates/21904002.pem
	I0414 13:56:58.844558 2224933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21904002.pem
	I0414 13:56:58.852266 2224933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21904002.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 13:56:58.879075 2224933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 13:56:58.893833 2224933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:56:58.900645 2224933 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:56:58.900715 2224933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:56:58.907864 2224933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
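The openssl/ln sequence above is OpenSSL's hash-based CA lookup: each PEM under /usr/share/ca-certificates gets a symlink named <subject-hash>.0 in /etc/ssl/certs (for example b5213941.0 for minikubeCA.pem) so the runtime can locate it by hash. The sketch below reproduces that wiring for one certificate; it shells out to the real openssl binary but is otherwise an illustration and needs root to write into /etc/ssl/certs.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	// `openssl x509 -hash -noout -in <pem>` prints the subject-name hash
	// OpenSSL uses for directory lookups.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl:", err)
		os.Exit(1)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	// Equivalent of `test -L link || ln -fs pem link`: only create the link
	// if it does not already exist.
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pem, link); err != nil {
			fmt.Fprintln(os.Stderr, "symlink:", err)
			os.Exit(1)
		}
	}
	fmt.Println(link, "->", pem)
}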
	I0414 13:56:58.922966 2224933 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 13:56:58.927869 2224933 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 13:56:58.927943 2224933 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-461086 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-461086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:56:58.928036 2224933 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 13:56:58.928096 2224933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 13:56:58.977806 2224933 cri.go:89] found id: ""
	I0414 13:56:58.977907 2224933 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 13:56:58.989954 2224933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 13:56:58.999956 2224933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 13:56:59.009904 2224933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 13:56:59.009924 2224933 kubeadm.go:157] found existing configuration files:
	
	I0414 13:56:59.009972 2224933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 13:56:59.019190 2224933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 13:56:59.019252 2224933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 13:56:59.028788 2224933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 13:56:59.037941 2224933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 13:56:59.037996 2224933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 13:56:59.047426 2224933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 13:56:59.057180 2224933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 13:56:59.057232 2224933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 13:56:59.067770 2224933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 13:56:59.078611 2224933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 13:56:59.078683 2224933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 13:56:59.092019 2224933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 13:56:59.261374 2224933 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 13:56:59.261699 2224933 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 13:56:59.453993 2224933 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 13:56:59.454201 2224933 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 13:56:59.454335 2224933 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 13:56:59.677954 2224933 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 13:56:59.679844 2224933 out.go:235]   - Generating certificates and keys ...
	I0414 13:56:59.679968 2224933 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 13:56:59.680078 2224933 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 13:56:59.867200 2224933 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 13:56:59.939552 2224933 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 13:57:00.115560 2224933 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 13:57:00.234984 2224933 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 13:57:00.589712 2224933 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 13:57:00.590145 2224933 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-461086 localhost] and IPs [192.168.50.41 127.0.0.1 ::1]
	I0414 13:57:00.769263 2224933 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 13:57:00.769693 2224933 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-461086 localhost] and IPs [192.168.50.41 127.0.0.1 ::1]
	I0414 13:57:01.151346 2224933 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 13:57:01.345117 2224933 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 13:57:01.463699 2224933 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 13:57:01.463908 2224933 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 13:57:01.617198 2224933 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 13:57:01.876318 2224933 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 13:57:02.079707 2224933 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 13:57:02.221205 2224933 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 13:57:02.240382 2224933 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 13:57:02.241270 2224933 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 13:57:02.241371 2224933 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 13:57:02.384668 2224933 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 13:57:02.387364 2224933 out.go:235]   - Booting up control plane ...
	I0414 13:57:02.387504 2224933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 13:57:02.396774 2224933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 13:57:02.400257 2224933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 13:57:02.401420 2224933 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 13:57:02.406233 2224933 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 13:57:42.403834 2224933 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 13:57:42.405352 2224933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:57:42.405615 2224933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:57:47.406397 2224933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:57:47.406624 2224933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:57:57.406954 2224933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:57:57.407206 2224933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:58:17.408499 2224933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:58:17.408767 2224933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:58:57.408362 2224933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:58:57.408658 2224933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:58:57.408686 2224933 kubeadm.go:310] 
	I0414 13:58:57.408739 2224933 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 13:58:57.408788 2224933 kubeadm.go:310] 		timed out waiting for the condition
	I0414 13:58:57.408796 2224933 kubeadm.go:310] 
	I0414 13:58:57.408838 2224933 kubeadm.go:310] 	This error is likely caused by:
	I0414 13:58:57.408916 2224933 kubeadm.go:310] 		- The kubelet is not running
	I0414 13:58:57.409036 2224933 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 13:58:57.409050 2224933 kubeadm.go:310] 
	I0414 13:58:57.409162 2224933 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 13:58:57.409227 2224933 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 13:58:57.409265 2224933 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 13:58:57.409272 2224933 kubeadm.go:310] 
	I0414 13:58:57.409367 2224933 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 13:58:57.409473 2224933 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 13:58:57.409483 2224933 kubeadm.go:310] 
	I0414 13:58:57.409602 2224933 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 13:58:57.409723 2224933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 13:58:57.409822 2224933 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 13:58:57.409932 2224933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 13:58:57.409944 2224933 kubeadm.go:310] 
	I0414 13:58:57.410798 2224933 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 13:58:57.410890 2224933 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 13:58:57.410985 2224933 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0414 13:58:57.411187 2224933 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-461086 localhost] and IPs [192.168.50.41 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-461086 localhost] and IPs [192.168.50.41 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-461086 localhost] and IPs [192.168.50.41 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-461086 localhost] and IPs [192.168.50.41 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0414 13:58:57.411239 2224933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 13:58:57.880868 2224933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:58:57.896327 2224933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 13:58:57.906839 2224933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 13:58:57.906872 2224933 kubeadm.go:157] found existing configuration files:
	
	I0414 13:58:57.906932 2224933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 13:58:57.916690 2224933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 13:58:57.916821 2224933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 13:58:57.926529 2224933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 13:58:57.936715 2224933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 13:58:57.936803 2224933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 13:58:57.946824 2224933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 13:58:57.957154 2224933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 13:58:57.957219 2224933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 13:58:57.966858 2224933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 13:58:57.976425 2224933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 13:58:57.976490 2224933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 13:58:57.986536 2224933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 13:58:58.193801 2224933 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:00:54.213648 2224933 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 14:00:54.213756 2224933 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 14:00:54.216026 2224933 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 14:00:54.216102 2224933 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:00:54.216232 2224933 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:00:54.216394 2224933 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:00:54.216527 2224933 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 14:00:54.216619 2224933 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:00:54.218170 2224933 out.go:235]   - Generating certificates and keys ...
	I0414 14:00:54.218303 2224933 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:00:54.218402 2224933 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:00:54.218509 2224933 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 14:00:54.218592 2224933 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 14:00:54.218698 2224933 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 14:00:54.218779 2224933 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 14:00:54.218865 2224933 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 14:00:54.218956 2224933 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 14:00:54.219081 2224933 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 14:00:54.219252 2224933 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 14:00:54.219334 2224933 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 14:00:54.219424 2224933 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:00:54.219497 2224933 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:00:54.219581 2224933 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:00:54.219661 2224933 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:00:54.219727 2224933 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:00:54.219875 2224933 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:00:54.220018 2224933 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:00:54.220105 2224933 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:00:54.220207 2224933 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:00:54.221684 2224933 out.go:235]   - Booting up control plane ...
	I0414 14:00:54.221793 2224933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:00:54.221916 2224933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:00:54.222047 2224933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:00:54.222153 2224933 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:00:54.222378 2224933 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 14:00:54.222451 2224933 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 14:00:54.222537 2224933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:00:54.222793 2224933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:00:54.222922 2224933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:00:54.223190 2224933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:00:54.223285 2224933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:00:54.223515 2224933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:00:54.223605 2224933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:00:54.223827 2224933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:00:54.223929 2224933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:00:54.224175 2224933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:00:54.224193 2224933 kubeadm.go:310] 
	I0414 14:00:54.224252 2224933 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 14:00:54.224311 2224933 kubeadm.go:310] 		timed out waiting for the condition
	I0414 14:00:54.224322 2224933 kubeadm.go:310] 
	I0414 14:00:54.224383 2224933 kubeadm.go:310] 	This error is likely caused by:
	I0414 14:00:54.224436 2224933 kubeadm.go:310] 		- The kubelet is not running
	I0414 14:00:54.224571 2224933 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 14:00:54.224580 2224933 kubeadm.go:310] 
	I0414 14:00:54.224719 2224933 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 14:00:54.224791 2224933 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 14:00:54.224829 2224933 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 14:00:54.224842 2224933 kubeadm.go:310] 
	I0414 14:00:54.224981 2224933 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 14:00:54.225100 2224933 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 14:00:54.225110 2224933 kubeadm.go:310] 
	I0414 14:00:54.225271 2224933 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 14:00:54.225407 2224933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 14:00:54.225530 2224933 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 14:00:54.225645 2224933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 14:00:54.225742 2224933 kubeadm.go:310] 
	I0414 14:00:54.225755 2224933 kubeadm.go:394] duration metric: took 3m55.297825629s to StartCluster
	I0414 14:00:54.225818 2224933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:00:54.225900 2224933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:00:54.285643 2224933 cri.go:89] found id: ""
	I0414 14:00:54.285682 2224933 logs.go:282] 0 containers: []
	W0414 14:00:54.285695 2224933 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:00:54.285703 2224933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:00:54.285778 2224933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:00:54.327781 2224933 cri.go:89] found id: ""
	I0414 14:00:54.327819 2224933 logs.go:282] 0 containers: []
	W0414 14:00:54.327831 2224933 logs.go:284] No container was found matching "etcd"
	I0414 14:00:54.327839 2224933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:00:54.327929 2224933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:00:54.370556 2224933 cri.go:89] found id: ""
	I0414 14:00:54.370591 2224933 logs.go:282] 0 containers: []
	W0414 14:00:54.370602 2224933 logs.go:284] No container was found matching "coredns"
	I0414 14:00:54.370619 2224933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:00:54.370687 2224933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:00:54.411742 2224933 cri.go:89] found id: ""
	I0414 14:00:54.411779 2224933 logs.go:282] 0 containers: []
	W0414 14:00:54.411792 2224933 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:00:54.411802 2224933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:00:54.411887 2224933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:00:54.453985 2224933 cri.go:89] found id: ""
	I0414 14:00:54.454020 2224933 logs.go:282] 0 containers: []
	W0414 14:00:54.454031 2224933 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:00:54.454041 2224933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:00:54.454104 2224933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:00:54.495146 2224933 cri.go:89] found id: ""
	I0414 14:00:54.495203 2224933 logs.go:282] 0 containers: []
	W0414 14:00:54.495215 2224933 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:00:54.495225 2224933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:00:54.495296 2224933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:00:54.532834 2224933 cri.go:89] found id: ""
	I0414 14:00:54.532867 2224933 logs.go:282] 0 containers: []
	W0414 14:00:54.532877 2224933 logs.go:284] No container was found matching "kindnet"
	I0414 14:00:54.532891 2224933 logs.go:123] Gathering logs for kubelet ...
	I0414 14:00:54.532908 2224933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:00:54.598879 2224933 logs.go:123] Gathering logs for dmesg ...
	I0414 14:00:54.598926 2224933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:00:54.615604 2224933 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:00:54.615645 2224933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:00:54.778485 2224933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:00:54.778516 2224933 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:00:54.778537 2224933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:00:54.902604 2224933 logs.go:123] Gathering logs for container status ...
	I0414 14:00:54.902645 2224933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0414 14:00:54.958247 2224933 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 14:00:54.958338 2224933 out.go:270] * 
	* 
	W0414 14:00:54.958414 2224933 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 14:00:54.958432 2224933 out.go:270] * 
	* 
	W0414 14:00:54.959464 2224933 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:00:54.962285 2224933 out.go:201] 
	W0414 14:00:54.963342 2224933 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 14:00:54.963385 2224933 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 14:00:54.963403 2224933 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 14:00:54.964749 2224933 out.go:201] 

                                                
                                                
** /stderr **
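The kubeadm output above already names the relevant triage commands, and the minikube warning points at the kubelet cgroup driver. A minimal consolidation is sketched below; it is not part of the recorded run, and wrapping the commands in `minikube ssh` is an assumption (any shell on the kubernetes-upgrade-461086 guest would do):

	# hypothetical triage sketch -- not recorded test output
	out/minikube-linux-amd64 -p kubernetes-upgrade-461086 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p kubernetes-upgrade-461086 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	out/minikube-linux-amd64 -p kubernetes-upgrade-461086 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry with the cgroup driver suggested in the warning above
	out/minikube-linux-amd64 start -p kubernetes-upgrade-461086 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd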
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-461086 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-461086
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-461086: (3.535243578s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-461086 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-461086 status --format={{.Host}}: exit status 7 (80.145779ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
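For context on the "may be ok" note: in this run `minikube status` returned exit code 7 while reporting the host as Stopped, so a wrapper that only needs to distinguish "stopped" from a genuine failure could branch on that code. A small sketch, assuming the exit code observed above:

	out/minikube-linux-amd64 -p kubernetes-upgrade-461086 status --format={{.Host}}
	rc=$?
	# exit 0 means the host is up; 7 matched the just-stopped profile in this run
	if [ "$rc" -ne 0 ] && [ "$rc" -ne 7 ]; then
	  echo "unexpected status exit code: $rc" >&2
	fi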
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-461086 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-461086 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.450087759s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-461086 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-461086 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-461086 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (95.033427ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-461086] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20623
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-461086
	    minikube start -p kubernetes-upgrade-461086 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4610862 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-461086 --kubernetes-version=v1.32.2
	    

                                                
                                                
** /stderr **
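The refusal above surfaces as exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), which the test itself relies on; a hypothetical wrapper outside the test suite could branch on it the same way, falling back to the v1.32.2 restart suggested in the output:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-461086 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio
	rc=$?
	if [ "$rc" -eq 106 ]; then
	  # downgrade refused; keep the existing cluster at v1.32.2 as suggested above
	  out/minikube-linux-amd64 start -p kubernetes-upgrade-461086 --kubernetes-version=v1.32.2
	fi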
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-461086 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0414 14:02:25.776942 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-461086 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m45.717036973s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-04-14 14:03:47.973674799 +0000 UTC m=+4239.778431535
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-461086 -n kubernetes-upgrade-461086
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-461086 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-461086 logs -n 25: (2.000495758s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-793608 sudo                                 | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | systemctl status containerd                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                 | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | systemctl cat containerd                              |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo cat                             | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | /lib/systemd/system/containerd.service                |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo cat                             | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | /etc/containerd/config.toml                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                 | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | containerd config dump                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                 | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | systemctl status crio --all                           |                           |         |         |                     |                     |
	|         | --full --no-pager                                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                 | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | systemctl cat crio --no-pager                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo find                            | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo crio                            | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | config                                                |                           |         |         |                     |                     |
	| delete  | -p cilium-793608                                      | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC | 14 Apr 25 14:00 UTC |
	| start   | -p force-systemd-flag-509258                          | force-systemd-flag-509258 | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC | 14 Apr 25 14:01 UTC |
	|         | --memory=2048 --force-systemd                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-461086                          | kubernetes-upgrade-461086 | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC | 14 Apr 25 14:00 UTC |
	| start   | -p pause-648153                                       | pause-648153              | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC | 14 Apr 25 14:01 UTC |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-742924                             | running-upgrade-742924    | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC | 14 Apr 25 14:00 UTC |
	| start   | -p kubernetes-upgrade-461086                          | kubernetes-upgrade-461086 | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC | 14 Apr 25 14:02 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-954411                             | old-k8s-version-954411    | jenkins | v1.35.0 | 14 Apr 25 14:01 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --kvm-network=default                                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |         |                     |                     |
	|         | --keep-context=false                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-509258 ssh cat                     | force-systemd-flag-509258 | jenkins | v1.35.0 | 14 Apr 25 14:01 UTC | 14 Apr 25 14:01 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-509258                          | force-systemd-flag-509258 | jenkins | v1.35.0 | 14 Apr 25 14:01 UTC | 14 Apr 25 14:01 UTC |
	| start   | -p no-preload-496809                                  | no-preload-496809         | jenkins | v1.35.0 | 14 Apr 25 14:01 UTC | 14 Apr 25 14:03 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                          |                           |         |         |                     |                     |
	| delete  | -p pause-648153                                       | pause-648153              | jenkins | v1.35.0 | 14 Apr 25 14:01 UTC | 14 Apr 25 14:01 UTC |
	| start   | -p embed-certs-242761                                 | embed-certs-242761        | jenkins | v1.35.0 | 14 Apr 25 14:01 UTC | 14 Apr 25 14:03 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                           |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-461086                          | kubernetes-upgrade-461086 | jenkins | v1.35.0 | 14 Apr 25 14:02 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-461086                          | kubernetes-upgrade-461086 | jenkins | v1.35.0 | 14 Apr 25 14:02 UTC | 14 Apr 25 14:03 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-242761           | embed-certs-242761        | jenkins | v1.35.0 | 14 Apr 25 14:03 UTC | 14 Apr 25 14:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p embed-certs-242761                                 | embed-certs-242761        | jenkins | v1.35.0 | 14 Apr 25 14:03 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:02:02
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:02:02.308444 2232414 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:02:02.308624 2232414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:02:02.308638 2232414 out.go:358] Setting ErrFile to fd 2...
	I0414 14:02:02.308645 2232414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:02:02.308982 2232414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 14:02:02.309807 2232414 out.go:352] Setting JSON to false
	I0414 14:02:02.311291 2232414 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":168261,"bootTime":1744471061,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:02:02.311451 2232414 start.go:139] virtualization: kvm guest
	I0414 14:02:02.313304 2232414 out.go:177] * [kubernetes-upgrade-461086] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:02:02.314652 2232414 out.go:177]   - MINIKUBE_LOCATION=20623
	I0414 14:02:02.314644 2232414 notify.go:220] Checking for updates...
	I0414 14:02:02.316959 2232414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:02:02.318119 2232414 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:02:02.319476 2232414 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:02:02.320782 2232414 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:02:02.321956 2232414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:02:02.323742 2232414 config.go:182] Loaded profile config "kubernetes-upgrade-461086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:02:02.324426 2232414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:02:02.324560 2232414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:02:02.341572 2232414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35901
	I0414 14:02:02.342131 2232414 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:02:02.342712 2232414 main.go:141] libmachine: Using API Version  1
	I0414 14:02:02.342732 2232414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:02:02.343154 2232414 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:02:02.343343 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:02:02.343642 2232414 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:02:02.344004 2232414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:02:02.344053 2232414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:02:02.359788 2232414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39103
	I0414 14:02:02.360394 2232414 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:02:02.360890 2232414 main.go:141] libmachine: Using API Version  1
	I0414 14:02:02.360920 2232414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:02:02.361285 2232414 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:02:02.361508 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:02:02.396038 2232414 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 14:02:02.397236 2232414 start.go:297] selected driver: kvm2
	I0414 14:02:02.397257 2232414 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-461086 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-461086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.41 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:02:02.397390 2232414 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:02:02.398461 2232414 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:02:02.398565 2232414 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20623-2183077/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:02:02.414780 2232414 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:02:02.415185 2232414 cni.go:84] Creating CNI manager for ""
	I0414 14:02:02.415241 2232414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:02:02.415288 2232414 start.go:340] cluster config:
	{Name:kubernetes-upgrade-461086 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-461086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.41 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:02:02.415395 2232414 iso.go:125] acquiring lock: {Name:mk1b6bc811d798b73231639961523f4c8d001a9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:02:02.417009 2232414 out.go:177] * Starting "kubernetes-upgrade-461086" primary control-plane node in "kubernetes-upgrade-461086" cluster
	I0414 14:02:05.503464 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.504013 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has current primary IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.504042 2231425 main.go:141] libmachine: (old-k8s-version-954411) found domain IP: 192.168.39.90
	I0414 14:02:05.504053 2231425 main.go:141] libmachine: (old-k8s-version-954411) reserving static IP address...
	I0414 14:02:05.504350 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-954411", mac: "52:54:00:e4:99:d7", ip: "192.168.39.90"} in network mk-old-k8s-version-954411
	I0414 14:02:05.589888 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | Getting to WaitForSSH function...
	I0414 14:02:05.589928 2231425 main.go:141] libmachine: (old-k8s-version-954411) reserved static IP address 192.168.39.90 for domain old-k8s-version-954411
	I0414 14:02:05.589942 2231425 main.go:141] libmachine: (old-k8s-version-954411) waiting for SSH...
	I0414 14:02:05.593022 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.593446 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:05.593475 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.593616 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | Using SSH client type: external
	I0414 14:02:05.593648 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | Using SSH private key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa (-rw-------)
	I0414 14:02:05.593692 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:02:05.593708 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | About to run SSH command:
	I0414 14:02:05.593719 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | exit 0
	I0414 14:02:05.721121 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | SSH cmd err, output: <nil>: 
	I0414 14:02:05.721417 2231425 main.go:141] libmachine: (old-k8s-version-954411) KVM machine creation complete
	I0414 14:02:05.721714 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetConfigRaw
	I0414 14:02:05.722325 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:02:05.722519 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:02:05.722666 2231425 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:02:05.722679 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetState
	I0414 14:02:05.723891 2231425 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:02:05.723906 2231425 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:02:05.723913 2231425 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:02:05.723921 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:05.726302 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.726658 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:05.726689 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.726828 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:05.727025 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:05.727169 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:05.727287 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:05.727485 2231425 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:05.727810 2231425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0414 14:02:05.727823 2231425 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:02:05.836170 2231425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:02:05.836199 2231425 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:02:05.836207 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:05.839446 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.839864 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:05.839893 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.840067 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:05.840266 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:05.840418 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:05.840577 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:05.840722 2231425 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:05.840967 2231425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0414 14:02:05.840979 2231425 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:02:05.949867 2231425 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:02:05.949956 2231425 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:02:05.949969 2231425 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:02:05.949980 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetMachineName
	I0414 14:02:05.950250 2231425 buildroot.go:166] provisioning hostname "old-k8s-version-954411"
	I0414 14:02:05.950288 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetMachineName
	I0414 14:02:05.950465 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:05.953094 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.953502 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:05.953539 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.953661 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:05.953876 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:05.954036 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:05.954254 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:05.954445 2231425 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:05.954805 2231425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0414 14:02:05.954835 2231425 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-954411 && echo "old-k8s-version-954411" | sudo tee /etc/hostname
	I0414 14:02:06.076251 2231425 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-954411
	
	I0414 14:02:06.076292 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:06.080415 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.080847 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:06.080896 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.081131 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:06.081336 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:06.081508 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:06.081666 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:06.081868 2231425 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:06.082163 2231425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0414 14:02:06.082187 2231425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-954411' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-954411/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-954411' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:02:06.198986 2231425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:02:06.199059 2231425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20623-2183077/.minikube CaCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20623-2183077/.minikube}
	I0414 14:02:06.199095 2231425 buildroot.go:174] setting up certificates
	I0414 14:02:06.199119 2231425 provision.go:84] configureAuth start
	I0414 14:02:06.199137 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetMachineName
	I0414 14:02:06.199506 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetIP
	I0414 14:02:06.202609 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.203013 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:06.203051 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.203181 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:06.205535 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.205856 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:06.205897 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.206020 2231425 provision.go:143] copyHostCerts
	I0414 14:02:06.206100 2231425 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem, removing ...
	I0414 14:02:06.206125 2231425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem
	I0414 14:02:06.206204 2231425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem (1123 bytes)
	I0414 14:02:06.206322 2231425 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem, removing ...
	I0414 14:02:06.206333 2231425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem
	I0414 14:02:06.206366 2231425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem (1675 bytes)
	I0414 14:02:06.206441 2231425 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem, removing ...
	I0414 14:02:06.206451 2231425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem
	I0414 14:02:06.206479 2231425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem (1078 bytes)
	I0414 14:02:06.206546 2231425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-954411 san=[127.0.0.1 192.168.39.90 localhost minikube old-k8s-version-954411]
	I0414 14:02:06.366218 2231425 provision.go:177] copyRemoteCerts
	I0414 14:02:06.366301 2231425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:02:06.366340 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:06.369647 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.370019 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:06.370054 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.370246 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:06.370475 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:06.370666 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:06.370826 2231425 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa Username:docker}
	I0414 14:02:06.455796 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 14:02:06.482620 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0414 14:02:06.507961 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:02:06.534083 2231425 provision.go:87] duration metric: took 334.941964ms to configureAuth
	I0414 14:02:06.534127 2231425 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:02:06.534326 2231425 config.go:182] Loaded profile config "old-k8s-version-954411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 14:02:06.534404 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:06.537117 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.537459 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:06.537515 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.537675 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:06.537900 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:06.538105 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:06.538265 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:06.538432 2231425 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:06.538667 2231425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0414 14:02:06.538683 2231425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 14:02:06.764766 2231425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 14:02:06.764796 2231425 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:02:06.764805 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetURL
	I0414 14:02:06.766384 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | using libvirt version 6000000
	I0414 14:02:06.768517 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.768966 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:06.769028 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.769171 2231425 main.go:141] libmachine: Docker is up and running!
	I0414 14:02:06.769190 2231425 main.go:141] libmachine: Reticulating splines...
	I0414 14:02:06.769199 2231425 client.go:171] duration metric: took 24.35847885s to LocalClient.Create
	I0414 14:02:06.769221 2231425 start.go:167] duration metric: took 24.358553067s to libmachine.API.Create "old-k8s-version-954411"
	I0414 14:02:06.769228 2231425 start.go:293] postStartSetup for "old-k8s-version-954411" (driver="kvm2")
	I0414 14:02:06.769237 2231425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:02:06.769255 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:02:06.769520 2231425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:02:06.769548 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:06.772067 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.772471 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:06.772503 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.772718 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:06.772928 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:06.773093 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:06.773260 2231425 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa Username:docker}
	I0414 14:02:06.855695 2231425 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:02:06.860172 2231425 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:02:06.860224 2231425 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/addons for local assets ...
	I0414 14:02:06.860312 2231425 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/files for local assets ...
	I0414 14:02:06.860410 2231425 filesync.go:149] local asset: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem -> 21904002.pem in /etc/ssl/certs
	I0414 14:02:06.860522 2231425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:02:06.870132 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:02:06.893118 2231425 start.go:296] duration metric: took 123.876511ms for postStartSetup
	I0414 14:02:06.893177 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetConfigRaw
	I0414 14:02:06.893787 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetIP
	I0414 14:02:06.896471 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.896752 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:06.896789 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.897083 2231425 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/config.json ...
	I0414 14:02:06.897288 2231425 start.go:128] duration metric: took 24.51158765s to createHost
	I0414 14:02:06.897315 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:06.899533 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.899839 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:06.899879 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.899988 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:06.900155 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:06.900310 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:06.900421 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:06.900559 2231425 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:06.900832 2231425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0414 14:02:06.900844 2231425 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:02:07.010367 2231816 start.go:364] duration metric: took 31.746255284s to acquireMachinesLock for "no-preload-496809"
	I0414 14:02:07.010439 2231816 start.go:93] Provisioning new machine with config: &{Name:no-preload-496809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-496809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:02:07.010561 2231816 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:02:02.418012 2232414 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:02:02.418054 2232414 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 14:02:02.418067 2232414 cache.go:56] Caching tarball of preloaded images
	I0414 14:02:02.418159 2232414 preload.go:172] Found /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 14:02:02.418171 2232414 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 14:02:02.418271 2232414 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/config.json ...
	I0414 14:02:02.418467 2232414 start.go:360] acquireMachinesLock for kubernetes-upgrade-461086: {Name:mka8bf7d0904b7ab9a32ecac2c5513c5d5418afd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:02:07.010129 2231425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744639326.991378871
	
	I0414 14:02:07.010176 2231425 fix.go:216] guest clock: 1744639326.991378871
	I0414 14:02:07.010188 2231425 fix.go:229] Guest: 2025-04-14 14:02:06.991378871 +0000 UTC Remote: 2025-04-14 14:02:06.897300925 +0000 UTC m=+60.018632384 (delta=94.077946ms)
	I0414 14:02:07.010242 2231425 fix.go:200] guest clock delta is within tolerance: 94.077946ms
	I0414 14:02:07.010253 2231425 start.go:83] releasing machines lock for "old-k8s-version-954411", held for 24.624766435s
	I0414 14:02:07.010296 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:02:07.010630 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetIP
	I0414 14:02:07.013833 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:07.014282 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:07.014303 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:07.014571 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:02:07.015116 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:02:07.015328 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:02:07.015453 2231425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:02:07.015500 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:07.015600 2231425 ssh_runner.go:195] Run: cat /version.json
	I0414 14:02:07.015631 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:07.018330 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:07.018676 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:07.018706 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:07.018725 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:07.018814 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:07.018995 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:07.019178 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:07.019193 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:07.019222 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:07.019346 2231425 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa Username:docker}
	I0414 14:02:07.019391 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:07.019510 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:07.019640 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:07.019783 2231425 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa Username:docker}
	I0414 14:02:07.098922 2231425 ssh_runner.go:195] Run: systemctl --version
	I0414 14:02:07.127540 2231425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 14:02:07.297830 2231425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:02:07.305518 2231425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:02:07.305596 2231425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:02:07.322772 2231425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:02:07.322805 2231425 start.go:495] detecting cgroup driver to use...
	I0414 14:02:07.322887 2231425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 14:02:07.338886 2231425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 14:02:07.354169 2231425 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:02:07.354246 2231425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:02:07.370147 2231425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:02:07.386422 2231425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:02:07.503201 2231425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:02:07.642194 2231425 docker.go:233] disabling docker service ...
	I0414 14:02:07.642266 2231425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:02:07.657618 2231425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:02:07.671661 2231425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:02:07.805253 2231425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:02:07.936806 2231425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:02:07.955665 2231425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:02:07.977834 2231425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 14:02:07.977898 2231425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:02:07.990144 2231425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 14:02:07.990219 2231425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:02:08.001051 2231425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:02:08.011866 2231425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:02:08.022831 2231425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:02:08.034294 2231425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:02:08.044252 2231425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:02:08.044309 2231425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:02:08.057115 2231425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:02:08.067172 2231425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:02:08.181004 2231425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 14:02:08.287832 2231425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 14:02:08.287922 2231425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 14:02:08.293140 2231425 start.go:563] Will wait 60s for crictl version
	I0414 14:02:08.293201 2231425 ssh_runner.go:195] Run: which crictl
	I0414 14:02:08.297185 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:02:08.350602 2231425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 14:02:08.350693 2231425 ssh_runner.go:195] Run: crio --version
	I0414 14:02:08.380823 2231425 ssh_runner.go:195] Run: crio --version
	I0414 14:02:08.415360 2231425 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 14:02:07.012203 2231816 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:02:07.012399 2231816 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:02:07.012462 2231816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:02:07.030518 2231816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36481
	I0414 14:02:07.031137 2231816 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:02:07.031697 2231816 main.go:141] libmachine: Using API Version  1
	I0414 14:02:07.031735 2231816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:02:07.032095 2231816 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:02:07.032271 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetMachineName
	I0414 14:02:07.032395 2231816 main.go:141] libmachine: (no-preload-496809) Calling .DriverName
	I0414 14:02:07.032528 2231816 start.go:159] libmachine.API.Create for "no-preload-496809" (driver="kvm2")
	I0414 14:02:07.032554 2231816 client.go:168] LocalClient.Create starting
	I0414 14:02:07.032592 2231816 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem
	I0414 14:02:07.032635 2231816 main.go:141] libmachine: Decoding PEM data...
	I0414 14:02:07.032658 2231816 main.go:141] libmachine: Parsing certificate...
	I0414 14:02:07.032781 2231816 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem
	I0414 14:02:07.032815 2231816 main.go:141] libmachine: Decoding PEM data...
	I0414 14:02:07.032833 2231816 main.go:141] libmachine: Parsing certificate...
	I0414 14:02:07.032861 2231816 main.go:141] libmachine: Running pre-create checks...
	I0414 14:02:07.032876 2231816 main.go:141] libmachine: (no-preload-496809) Calling .PreCreateCheck
	I0414 14:02:07.033286 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetConfigRaw
	I0414 14:02:07.033729 2231816 main.go:141] libmachine: Creating machine...
	I0414 14:02:07.033744 2231816 main.go:141] libmachine: (no-preload-496809) Calling .Create
	I0414 14:02:07.033892 2231816 main.go:141] libmachine: (no-preload-496809) creating KVM machine...
	I0414 14:02:07.033910 2231816 main.go:141] libmachine: (no-preload-496809) creating network...
	I0414 14:02:07.035373 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found existing default KVM network
	I0414 14:02:07.036410 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:07.036206 2232495 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:06:97:78} reservation:<nil>}
	I0414 14:02:07.037112 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:07.037012 2232495 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:a7:07:73} reservation:<nil>}
	I0414 14:02:07.037803 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:07.037720 2232495 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000209ec0}
	I0414 14:02:07.037866 2231816 main.go:141] libmachine: (no-preload-496809) DBG | created network xml: 
	I0414 14:02:07.037894 2231816 main.go:141] libmachine: (no-preload-496809) DBG | <network>
	I0414 14:02:07.037909 2231816 main.go:141] libmachine: (no-preload-496809) DBG |   <name>mk-no-preload-496809</name>
	I0414 14:02:07.037921 2231816 main.go:141] libmachine: (no-preload-496809) DBG |   <dns enable='no'/>
	I0414 14:02:07.037938 2231816 main.go:141] libmachine: (no-preload-496809) DBG |   
	I0414 14:02:07.037961 2231816 main.go:141] libmachine: (no-preload-496809) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0414 14:02:07.037972 2231816 main.go:141] libmachine: (no-preload-496809) DBG |     <dhcp>
	I0414 14:02:07.037984 2231816 main.go:141] libmachine: (no-preload-496809) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0414 14:02:07.037992 2231816 main.go:141] libmachine: (no-preload-496809) DBG |     </dhcp>
	I0414 14:02:07.038000 2231816 main.go:141] libmachine: (no-preload-496809) DBG |   </ip>
	I0414 14:02:07.038011 2231816 main.go:141] libmachine: (no-preload-496809) DBG |   
	I0414 14:02:07.038020 2231816 main.go:141] libmachine: (no-preload-496809) DBG | </network>
	I0414 14:02:07.038029 2231816 main.go:141] libmachine: (no-preload-496809) DBG | 
	I0414 14:02:07.043301 2231816 main.go:141] libmachine: (no-preload-496809) DBG | trying to create private KVM network mk-no-preload-496809 192.168.61.0/24...
	I0414 14:02:07.126806 2231816 main.go:141] libmachine: (no-preload-496809) setting up store path in /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/no-preload-496809 ...
	I0414 14:02:07.126836 2231816 main.go:141] libmachine: (no-preload-496809) building disk image from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:02:07.126848 2231816 main.go:141] libmachine: (no-preload-496809) DBG | private KVM network mk-no-preload-496809 192.168.61.0/24 created
	I0414 14:02:07.126864 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:07.126717 2232495 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:02:07.126972 2231816 main.go:141] libmachine: (no-preload-496809) Downloading /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:02:07.462229 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:07.462083 2232495 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/no-preload-496809/id_rsa...
	I0414 14:02:07.600357 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:07.600228 2232495 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/no-preload-496809/no-preload-496809.rawdisk...
	I0414 14:02:07.600399 2231816 main.go:141] libmachine: (no-preload-496809) DBG | Writing magic tar header
	I0414 14:02:07.600415 2231816 main.go:141] libmachine: (no-preload-496809) DBG | Writing SSH key tar header
	I0414 14:02:07.600439 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:07.600369 2232495 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/no-preload-496809 ...
	I0414 14:02:07.600504 2231816 main.go:141] libmachine: (no-preload-496809) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/no-preload-496809
	I0414 14:02:07.600530 2231816 main.go:141] libmachine: (no-preload-496809) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines
	I0414 14:02:07.600561 2231816 main.go:141] libmachine: (no-preload-496809) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/no-preload-496809 (perms=drwx------)
	I0414 14:02:07.600585 2231816 main.go:141] libmachine: (no-preload-496809) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:02:07.600597 2231816 main.go:141] libmachine: (no-preload-496809) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:02:07.600612 2231816 main.go:141] libmachine: (no-preload-496809) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube (perms=drwxr-xr-x)
	I0414 14:02:07.600620 2231816 main.go:141] libmachine: (no-preload-496809) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077 (perms=drwxrwxr-x)
	I0414 14:02:07.600627 2231816 main.go:141] libmachine: (no-preload-496809) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:02:07.600639 2231816 main.go:141] libmachine: (no-preload-496809) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:02:07.600650 2231816 main.go:141] libmachine: (no-preload-496809) creating domain...
	I0414 14:02:07.600665 2231816 main.go:141] libmachine: (no-preload-496809) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077
	I0414 14:02:07.600679 2231816 main.go:141] libmachine: (no-preload-496809) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:02:07.600689 2231816 main.go:141] libmachine: (no-preload-496809) DBG | checking permissions on dir: /home/jenkins
	I0414 14:02:07.600697 2231816 main.go:141] libmachine: (no-preload-496809) DBG | checking permissions on dir: /home
	I0414 14:02:07.600701 2231816 main.go:141] libmachine: (no-preload-496809) DBG | skipping /home - not owner
	I0414 14:02:07.602109 2231816 main.go:141] libmachine: (no-preload-496809) define libvirt domain using xml: 
	I0414 14:02:07.602132 2231816 main.go:141] libmachine: (no-preload-496809) <domain type='kvm'>
	I0414 14:02:07.602178 2231816 main.go:141] libmachine: (no-preload-496809)   <name>no-preload-496809</name>
	I0414 14:02:07.602208 2231816 main.go:141] libmachine: (no-preload-496809)   <memory unit='MiB'>2200</memory>
	I0414 14:02:07.602219 2231816 main.go:141] libmachine: (no-preload-496809)   <vcpu>2</vcpu>
	I0414 14:02:07.602232 2231816 main.go:141] libmachine: (no-preload-496809)   <features>
	I0414 14:02:07.602250 2231816 main.go:141] libmachine: (no-preload-496809)     <acpi/>
	I0414 14:02:07.602258 2231816 main.go:141] libmachine: (no-preload-496809)     <apic/>
	I0414 14:02:07.602263 2231816 main.go:141] libmachine: (no-preload-496809)     <pae/>
	I0414 14:02:07.602270 2231816 main.go:141] libmachine: (no-preload-496809)     
	I0414 14:02:07.602275 2231816 main.go:141] libmachine: (no-preload-496809)   </features>
	I0414 14:02:07.602280 2231816 main.go:141] libmachine: (no-preload-496809)   <cpu mode='host-passthrough'>
	I0414 14:02:07.602285 2231816 main.go:141] libmachine: (no-preload-496809)   
	I0414 14:02:07.602294 2231816 main.go:141] libmachine: (no-preload-496809)   </cpu>
	I0414 14:02:07.602303 2231816 main.go:141] libmachine: (no-preload-496809)   <os>
	I0414 14:02:07.602317 2231816 main.go:141] libmachine: (no-preload-496809)     <type>hvm</type>
	I0414 14:02:07.602344 2231816 main.go:141] libmachine: (no-preload-496809)     <boot dev='cdrom'/>
	I0414 14:02:07.602377 2231816 main.go:141] libmachine: (no-preload-496809)     <boot dev='hd'/>
	I0414 14:02:07.602389 2231816 main.go:141] libmachine: (no-preload-496809)     <bootmenu enable='no'/>
	I0414 14:02:07.602393 2231816 main.go:141] libmachine: (no-preload-496809)   </os>
	I0414 14:02:07.602401 2231816 main.go:141] libmachine: (no-preload-496809)   <devices>
	I0414 14:02:07.602406 2231816 main.go:141] libmachine: (no-preload-496809)     <disk type='file' device='cdrom'>
	I0414 14:02:07.602416 2231816 main.go:141] libmachine: (no-preload-496809)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/no-preload-496809/boot2docker.iso'/>
	I0414 14:02:07.602423 2231816 main.go:141] libmachine: (no-preload-496809)       <target dev='hdc' bus='scsi'/>
	I0414 14:02:07.602428 2231816 main.go:141] libmachine: (no-preload-496809)       <readonly/>
	I0414 14:02:07.602436 2231816 main.go:141] libmachine: (no-preload-496809)     </disk>
	I0414 14:02:07.602464 2231816 main.go:141] libmachine: (no-preload-496809)     <disk type='file' device='disk'>
	I0414 14:02:07.602489 2231816 main.go:141] libmachine: (no-preload-496809)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:02:07.602518 2231816 main.go:141] libmachine: (no-preload-496809)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/no-preload-496809/no-preload-496809.rawdisk'/>
	I0414 14:02:07.602530 2231816 main.go:141] libmachine: (no-preload-496809)       <target dev='hda' bus='virtio'/>
	I0414 14:02:07.602541 2231816 main.go:141] libmachine: (no-preload-496809)     </disk>
	I0414 14:02:07.602551 2231816 main.go:141] libmachine: (no-preload-496809)     <interface type='network'>
	I0414 14:02:07.602560 2231816 main.go:141] libmachine: (no-preload-496809)       <source network='mk-no-preload-496809'/>
	I0414 14:02:07.602573 2231816 main.go:141] libmachine: (no-preload-496809)       <model type='virtio'/>
	I0414 14:02:07.602587 2231816 main.go:141] libmachine: (no-preload-496809)     </interface>
	I0414 14:02:07.602596 2231816 main.go:141] libmachine: (no-preload-496809)     <interface type='network'>
	I0414 14:02:07.602603 2231816 main.go:141] libmachine: (no-preload-496809)       <source network='default'/>
	I0414 14:02:07.602611 2231816 main.go:141] libmachine: (no-preload-496809)       <model type='virtio'/>
	I0414 14:02:07.602620 2231816 main.go:141] libmachine: (no-preload-496809)     </interface>
	I0414 14:02:07.602630 2231816 main.go:141] libmachine: (no-preload-496809)     <serial type='pty'>
	I0414 14:02:07.602640 2231816 main.go:141] libmachine: (no-preload-496809)       <target port='0'/>
	I0414 14:02:07.602661 2231816 main.go:141] libmachine: (no-preload-496809)     </serial>
	I0414 14:02:07.602675 2231816 main.go:141] libmachine: (no-preload-496809)     <console type='pty'>
	I0414 14:02:07.602712 2231816 main.go:141] libmachine: (no-preload-496809)       <target type='serial' port='0'/>
	I0414 14:02:07.602724 2231816 main.go:141] libmachine: (no-preload-496809)     </console>
	I0414 14:02:07.602733 2231816 main.go:141] libmachine: (no-preload-496809)     <rng model='virtio'>
	I0414 14:02:07.602749 2231816 main.go:141] libmachine: (no-preload-496809)       <backend model='random'>/dev/random</backend>
	I0414 14:02:07.602760 2231816 main.go:141] libmachine: (no-preload-496809)     </rng>
	I0414 14:02:07.602769 2231816 main.go:141] libmachine: (no-preload-496809)     
	I0414 14:02:07.602777 2231816 main.go:141] libmachine: (no-preload-496809)     
	I0414 14:02:07.602786 2231816 main.go:141] libmachine: (no-preload-496809)   </devices>
	I0414 14:02:07.602797 2231816 main.go:141] libmachine: (no-preload-496809) </domain>
	I0414 14:02:07.602807 2231816 main.go:141] libmachine: (no-preload-496809) 
	I0414 14:02:07.666626 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:98:a2:c6 in network default
	I0414 14:02:07.667364 2231816 main.go:141] libmachine: (no-preload-496809) starting domain...
	I0414 14:02:07.667396 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:07.667405 2231816 main.go:141] libmachine: (no-preload-496809) ensuring networks are active...
	I0414 14:02:07.668283 2231816 main.go:141] libmachine: (no-preload-496809) Ensuring network default is active
	I0414 14:02:07.668604 2231816 main.go:141] libmachine: (no-preload-496809) Ensuring network mk-no-preload-496809 is active
	I0414 14:02:07.669313 2231816 main.go:141] libmachine: (no-preload-496809) getting domain XML...
	I0414 14:02:07.670192 2231816 main.go:141] libmachine: (no-preload-496809) creating domain...
	I0414 14:02:09.059073 2231816 main.go:141] libmachine: (no-preload-496809) waiting for IP...
	I0414 14:02:09.060067 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:09.060606 2231816 main.go:141] libmachine: (no-preload-496809) DBG | unable to find current IP address of domain no-preload-496809 in network mk-no-preload-496809
	I0414 14:02:09.060661 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:09.060615 2232495 retry.go:31] will retry after 233.580601ms: waiting for domain to come up
	I0414 14:02:09.296303 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:09.297007 2231816 main.go:141] libmachine: (no-preload-496809) DBG | unable to find current IP address of domain no-preload-496809 in network mk-no-preload-496809
	I0414 14:02:09.297042 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:09.296959 2232495 retry.go:31] will retry after 357.790611ms: waiting for domain to come up
	I0414 14:02:09.656867 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:09.657381 2231816 main.go:141] libmachine: (no-preload-496809) DBG | unable to find current IP address of domain no-preload-496809 in network mk-no-preload-496809
	I0414 14:02:09.657408 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:09.657265 2232495 retry.go:31] will retry after 429.420819ms: waiting for domain to come up
	I0414 14:02:10.088988 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:10.089981 2231816 main.go:141] libmachine: (no-preload-496809) DBG | unable to find current IP address of domain no-preload-496809 in network mk-no-preload-496809
	I0414 14:02:10.090014 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:10.089944 2232495 retry.go:31] will retry after 536.662568ms: waiting for domain to come up
	I0414 14:02:08.416524 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetIP
	I0414 14:02:08.419481 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:08.419961 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:08.419993 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:08.420212 2231425 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:02:08.424377 2231425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:02:08.437464 2231425 kubeadm.go:883] updating cluster {Name:old-k8s-version-954411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-954411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:02:08.437589 2231425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 14:02:08.437646 2231425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:02:08.473351 2231425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 14:02:08.473422 2231425 ssh_runner.go:195] Run: which lz4
	I0414 14:02:08.478843 2231425 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:02:08.483912 2231425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:02:08.483956 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 14:02:10.254792 2231425 crio.go:462] duration metric: took 1.775988843s to copy over tarball
	I0414 14:02:10.254915 2231425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:02:10.629168 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:10.629789 2231816 main.go:141] libmachine: (no-preload-496809) DBG | unable to find current IP address of domain no-preload-496809 in network mk-no-preload-496809
	I0414 14:02:10.629815 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:10.629752 2232495 retry.go:31] will retry after 592.890422ms: waiting for domain to come up
	I0414 14:02:11.224877 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:11.225518 2231816 main.go:141] libmachine: (no-preload-496809) DBG | unable to find current IP address of domain no-preload-496809 in network mk-no-preload-496809
	I0414 14:02:11.225551 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:11.225446 2232495 retry.go:31] will retry after 661.718152ms: waiting for domain to come up
	I0414 14:02:11.888386 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:11.888979 2231816 main.go:141] libmachine: (no-preload-496809) DBG | unable to find current IP address of domain no-preload-496809 in network mk-no-preload-496809
	I0414 14:02:11.889004 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:11.888910 2232495 retry.go:31] will retry after 919.102834ms: waiting for domain to come up
	I0414 14:02:12.809600 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:12.810128 2231816 main.go:141] libmachine: (no-preload-496809) DBG | unable to find current IP address of domain no-preload-496809 in network mk-no-preload-496809
	I0414 14:02:12.810166 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:12.810107 2232495 retry.go:31] will retry after 975.798393ms: waiting for domain to come up
	I0414 14:02:13.787266 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:13.787742 2231816 main.go:141] libmachine: (no-preload-496809) DBG | unable to find current IP address of domain no-preload-496809 in network mk-no-preload-496809
	I0414 14:02:13.787794 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:13.787705 2232495 retry.go:31] will retry after 1.592624195s: waiting for domain to come up
	I0414 14:02:12.899564 2231425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.644617725s)
	I0414 14:02:12.899592 2231425 crio.go:469] duration metric: took 2.644759434s to extract the tarball
	I0414 14:02:12.899600 2231425 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 14:02:12.945037 2231425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:02:12.991509 2231425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 14:02:12.991550 2231425 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 14:02:12.991629 2231425 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:02:12.991677 2231425 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 14:02:12.991715 2231425 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 14:02:12.991691 2231425 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:02:12.991713 2231425 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 14:02:12.991656 2231425 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:02:12.991748 2231425 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:02:12.991744 2231425 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:02:12.993375 2231425 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:02:12.993406 2231425 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:02:12.993492 2231425 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 14:02:12.993701 2231425 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:02:12.993717 2231425 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:02:12.993744 2231425 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 14:02:12.993776 2231425 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 14:02:12.993937 2231425 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:02:13.132156 2231425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 14:02:13.138207 2231425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:02:13.154674 2231425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 14:02:13.198762 2231425 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 14:02:13.198816 2231425 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 14:02:13.198868 2231425 ssh_runner.go:195] Run: which crictl
	I0414 14:02:13.205801 2231425 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 14:02:13.205852 2231425 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:02:13.205899 2231425 ssh_runner.go:195] Run: which crictl
	I0414 14:02:13.229819 2231425 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 14:02:13.229862 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 14:02:13.229892 2231425 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 14:02:13.229935 2231425 ssh_runner.go:195] Run: which crictl
	I0414 14:02:13.229952 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:02:13.278498 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 14:02:13.278527 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 14:02:13.278498 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:02:13.357903 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:02:13.357949 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 14:02:13.357949 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 14:02:13.427839 2231425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 14:02:13.427875 2231425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 14:02:13.427952 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 14:02:13.463208 2231425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 14:02:13.664017 2231425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:02:13.674692 2231425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 14:02:13.676906 2231425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:02:13.679772 2231425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:02:13.752348 2231425 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 14:02:13.752407 2231425 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:02:13.752462 2231425 ssh_runner.go:195] Run: which crictl
	I0414 14:02:13.760839 2231425 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 14:02:13.760888 2231425 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:02:13.760919 2231425 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 14:02:13.760956 2231425 ssh_runner.go:195] Run: which crictl
	I0414 14:02:13.760959 2231425 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 14:02:13.761001 2231425 ssh_runner.go:195] Run: which crictl
	I0414 14:02:13.791878 2231425 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 14:02:13.791930 2231425 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:02:13.791964 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:02:13.792008 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:02:13.791968 2231425 ssh_runner.go:195] Run: which crictl
	I0414 14:02:13.792059 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 14:02:13.834942 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 14:02:13.878301 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:02:13.878390 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:02:13.878390 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:02:13.907901 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 14:02:13.958787 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:02:13.958832 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:02:13.979006 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:02:14.014817 2231425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 14:02:14.044137 2231425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 14:02:14.044240 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:02:14.063857 2231425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 14:02:14.096492 2231425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 14:02:15.808205 2231425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:02:15.950831 2231425 cache_images.go:92] duration metric: took 2.959258613s to LoadCachedImages
	W0414 14:02:15.950962 2231425 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
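
For readers following the cache-load flow above: each "needs transfer" decision compares the image ID the runtime reports against the expected hash, removes the stale tag, and only then loads the cached tarball. A minimal Go sketch of that check, reusing the same podman/crictl commands that appear in the log (this is an illustration, not minikube's actual cache_images.go):

// Illustrative sketch: decide whether a cached image needs to be transferred
// by comparing the ID the runtime reports against the expected hash.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer returns true when the runtime does not already hold the image
// at the expected ID; in that case the stale tag is removed so the cached
// tarball can be loaded afterwards.
func needsTransfer(image, wantID string) (bool, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true, nil // image not present at all: definitely needs transfer
	}
	if strings.TrimSpace(string(out)) == wantID {
		return false, nil // already at the expected hash, nothing to do
	}
	if err := exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run(); err != nil {
		return true, fmt.Errorf("removing stale %s: %w", image, err)
	}
	return true, nil
}
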
	I0414 14:02:15.950985 2231425 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.20.0 crio true true} ...
	I0414 14:02:15.951123 2231425 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-954411 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-954411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:02:15.951224 2231425 ssh_runner.go:195] Run: crio config
	I0414 14:02:16.007159 2231425 cni.go:84] Creating CNI manager for ""
	I0414 14:02:16.007208 2231425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:02:16.007223 2231425 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:02:16.007244 2231425 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-954411 NodeName:old-k8s-version-954411 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 14:02:16.007393 2231425 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-954411"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
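
The kubeadm config above is rendered from the options struct logged a few lines earlier. As a rough illustration of that pattern (the struct fields and template text here are assumptions for the example, not minikube's real template), a config like it can be generated with text/template:

// Illustrative sketch: render a kubeadm config from a small parameter struct.
package main

import (
	"os"
	"text/template"
)

type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	KubernetesVersion string
}

const cfgTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.39.90",
		BindPort:          8443,
		NodeName:          "old-k8s-version-954411",
		PodSubnet:         "10.244.0.0/16",
		KubernetesVersion: "v1.20.0",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(cfgTmpl))
	_ = tmpl.Execute(os.Stdout, p) // writes the rendered YAML to stdout
}
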
	
	I0414 14:02:16.007460 2231425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 14:02:16.017677 2231425 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:02:16.017751 2231425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 14:02:16.027626 2231425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0414 14:02:16.048186 2231425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:02:16.068640 2231425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0414 14:02:16.086038 2231425 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I0414 14:02:16.090037 2231425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
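
The one-liner above keeps the hosts update idempotent: it filters out any existing control-plane.minikube.internal entry, appends the fresh mapping, and copies the result back over /etc/hosts. A small Go sketch of the same filter-and-append approach (it writes the file directly instead of going through a temp file and sudo cp, and the paths are hard-coded for illustration):

// Sketch of an idempotent /etc/hosts update: drop any line already mapping the
// host name, then append the new entry.
package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const entry = "192.168.39.90\t" + host

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
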
	I0414 14:02:16.102492 2231425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:02:16.236159 2231425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:02:16.254893 2231425 certs.go:68] Setting up /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411 for IP: 192.168.39.90
	I0414 14:02:16.254923 2231425 certs.go:194] generating shared ca certs ...
	I0414 14:02:16.254952 2231425 certs.go:226] acquiring lock for ca certs: {Name:mkd994da28098ae08a84efba20f096b52fe71222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:16.255131 2231425 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key
	I0414 14:02:16.255183 2231425 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key
	I0414 14:02:16.255193 2231425 certs.go:256] generating profile certs ...
	I0414 14:02:16.255263 2231425 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/client.key
	I0414 14:02:16.255294 2231425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/client.crt with IP's: []
	I0414 14:02:16.523333 2231425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/client.crt ...
	I0414 14:02:16.523372 2231425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/client.crt: {Name:mke05337cb5defe1d267510b184d8dbaeb2d14c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:16.523595 2231425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/client.key ...
	I0414 14:02:16.523621 2231425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/client.key: {Name:mk998705f35b2c4f125c6e5ac873c777cbc71e97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:16.523728 2231425 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.key.798e3633
	I0414 14:02:16.523745 2231425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.crt.798e3633 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.90]
	I0414 14:02:16.652576 2231425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.crt.798e3633 ...
	I0414 14:02:16.652626 2231425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.crt.798e3633: {Name:mk7178d1a073b554cc9d69147a63b0fe7a2e9681 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:16.652875 2231425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.key.798e3633 ...
	I0414 14:02:16.652902 2231425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.key.798e3633: {Name:mk6cb6bb20bed971a3c219e5265c60c0db095156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:16.653031 2231425 certs.go:381] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.crt.798e3633 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.crt
	I0414 14:02:16.653138 2231425 certs.go:385] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.key.798e3633 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.key
	I0414 14:02:16.653238 2231425 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.key
	I0414 14:02:16.653263 2231425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.crt with IP's: []
	I0414 14:02:16.805416 2231425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.crt ...
	I0414 14:02:16.805448 2231425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.crt: {Name:mk8720e05c4bd25339fa6d45e4047afa245318bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:16.805621 2231425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.key ...
	I0414 14:02:16.805639 2231425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.key: {Name:mkf5b296e3e27886356c877eef73fac5d7e589c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
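
Each of the "generating signed profile cert" steps above boils down to creating a fresh key pair and signing a certificate with the shared minikube CA. A self-contained sketch of that signing step using the Go standard library (paths, subject, and validity period are placeholders, not minikube's exact values):

// Illustrative sketch: sign a client certificate with an existing CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// check keeps the sketch short; real code would propagate errors.
func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the shared CA certificate and key (paths are placeholders).
	caPEM, err := os.ReadFile("ca.crt")
	check(err)
	caKeyPEM, err := os.ReadFile("ca.key")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	check(err)

	// Fresh key pair for the profile ("client") certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	check(err)

	check(os.WriteFile("client.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	check(os.WriteFile("client.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600))
}
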
	I0414 14:02:16.805806 2231425 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem (1338 bytes)
	W0414 14:02:16.805842 2231425 certs.go:480] ignoring /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400_empty.pem, impossibly tiny 0 bytes
	I0414 14:02:16.805853 2231425 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 14:02:16.805873 2231425 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem (1078 bytes)
	I0414 14:02:16.805894 2231425 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:02:16.805916 2231425 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem (1675 bytes)
	I0414 14:02:16.805952 2231425 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:02:16.806528 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:02:16.833208 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:02:16.857246 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:02:16.885790 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:02:16.914739 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 14:02:16.944834 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:02:16.971788 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:02:17.000687 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 14:02:17.025229 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /usr/share/ca-certificates/21904002.pem (1708 bytes)
	I0414 14:02:17.049962 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:02:17.082303 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem --> /usr/share/ca-certificates/2190400.pem (1338 bytes)
	I0414 14:02:17.119486 2231425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:02:17.137826 2231425 ssh_runner.go:195] Run: openssl version
	I0414 14:02:17.146664 2231425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:02:17.162054 2231425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:02:17.166832 2231425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:02:17.166917 2231425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:02:17.173395 2231425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:02:17.193067 2231425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2190400.pem && ln -fs /usr/share/ca-certificates/2190400.pem /etc/ssl/certs/2190400.pem"
	I0414 14:02:17.205436 2231425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2190400.pem
	I0414 14:02:17.210255 2231425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 13:02 /usr/share/ca-certificates/2190400.pem
	I0414 14:02:17.210343 2231425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2190400.pem
	I0414 14:02:17.216410 2231425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2190400.pem /etc/ssl/certs/51391683.0"
	I0414 14:02:17.227772 2231425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21904002.pem && ln -fs /usr/share/ca-certificates/21904002.pem /etc/ssl/certs/21904002.pem"
	I0414 14:02:17.239128 2231425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21904002.pem
	I0414 14:02:17.244015 2231425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 13:02 /usr/share/ca-certificates/21904002.pem
	I0414 14:02:17.244114 2231425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21904002.pem
	I0414 14:02:17.250024 2231425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21904002.pem /etc/ssl/certs/3ec20f2e.0"
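
The openssl/ln sequence above installs each PEM into the system trust store: openssl x509 -hash prints the certificate's subject hash, and /etc/ssl/certs/<hash>.0 is symlinked to the certificate so OpenSSL-based clients can find it by hash. A compact Go version of the same hash-and-link step (paths are illustrative):

// Sketch: compute the subject hash of a CA certificate and symlink it into
// /etc/ssl/certs under the <hash>.0 naming convention.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	_ = os.Remove(link) // ignore "does not exist"; mirrors ln -fs
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
}
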
	I0414 14:02:17.260827 2231425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:02:17.265133 2231425 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:02:17.265204 2231425 kubeadm.go:392] StartCluster: {Name:old-k8s-version-954411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-954411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:02:17.265336 2231425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 14:02:17.265421 2231425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:02:17.314385 2231425 cri.go:89] found id: ""
	I0414 14:02:17.314476 2231425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:02:17.325551 2231425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:02:17.336373 2231425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:02:17.346695 2231425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:02:17.346733 2231425 kubeadm.go:157] found existing configuration files:
	
	I0414 14:02:17.346782 2231425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:02:17.356406 2231425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:02:17.356497 2231425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:02:17.366580 2231425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:02:17.379831 2231425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:02:17.379909 2231425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:02:17.393525 2231425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:02:17.405892 2231425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:02:17.405965 2231425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:02:17.418473 2231425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:02:17.428225 2231425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:02:17.428299 2231425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 14:02:17.438056 2231425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:02:17.568005 2231425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 14:02:17.568155 2231425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:02:17.732274 2231425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:02:17.732474 2231425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:02:17.732633 2231425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 14:02:17.928163 2231425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:02:15.381866 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:15.382467 2231816 main.go:141] libmachine: (no-preload-496809) DBG | unable to find current IP address of domain no-preload-496809 in network mk-no-preload-496809
	I0414 14:02:15.382498 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:15.382417 2232495 retry.go:31] will retry after 2.203770873s: waiting for domain to come up
	I0414 14:02:17.588202 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:17.588756 2231816 main.go:141] libmachine: (no-preload-496809) DBG | unable to find current IP address of domain no-preload-496809 in network mk-no-preload-496809
	I0414 14:02:17.588788 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:17.588704 2232495 retry.go:31] will retry after 1.750807221s: waiting for domain to come up
	I0414 14:02:19.341215 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:19.341750 2231816 main.go:141] libmachine: (no-preload-496809) DBG | unable to find current IP address of domain no-preload-496809 in network mk-no-preload-496809
	I0414 14:02:19.341788 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:19.341664 2232495 retry.go:31] will retry after 3.402458256s: waiting for domain to come up
	I0414 14:02:17.929957 2231425 out.go:235]   - Generating certificates and keys ...
	I0414 14:02:17.930090 2231425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:02:17.930223 2231425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:02:18.165073 2231425 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:02:18.550030 2231425 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:02:18.819652 2231425 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:02:19.419932 2231425 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:02:19.494572 2231425 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:02:19.494786 2231425 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-954411] and IPs [192.168.39.90 127.0.0.1 ::1]
	I0414 14:02:19.569868 2231425 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:02:19.570107 2231425 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-954411] and IPs [192.168.39.90 127.0.0.1 ::1]
	I0414 14:02:19.718840 2231425 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:02:19.849363 2231425 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:02:20.116069 2231425 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:02:20.116359 2231425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:02:20.319491 2231425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:02:20.521436 2231425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:02:20.651559 2231425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:02:20.783494 2231425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:02:20.802277 2231425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:02:20.803259 2231425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:02:20.803313 2231425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:02:20.934146 2231425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:02:20.936033 2231425 out.go:235]   - Booting up control plane ...
	I0414 14:02:20.936150 2231425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:02:20.943781 2231425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:02:20.946201 2231425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:02:20.947125 2231425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:02:20.952024 2231425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 14:02:22.746097 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:22.746635 2231816 main.go:141] libmachine: (no-preload-496809) DBG | unable to find current IP address of domain no-preload-496809 in network mk-no-preload-496809
	I0414 14:02:22.746670 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:22.746582 2232495 retry.go:31] will retry after 3.680418058s: waiting for domain to come up
	I0414 14:02:26.430614 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:26.431088 2231816 main.go:141] libmachine: (no-preload-496809) DBG | unable to find current IP address of domain no-preload-496809 in network mk-no-preload-496809
	I0414 14:02:26.431111 2231816 main.go:141] libmachine: (no-preload-496809) DBG | I0414 14:02:26.431053 2232495 retry.go:31] will retry after 5.298167354s: waiting for domain to come up
	I0414 14:02:33.357852 2232297 start.go:364] duration metric: took 38.544713109s to acquireMachinesLock for "embed-certs-242761"
	I0414 14:02:33.357943 2232297 start.go:93] Provisioning new machine with config: &{Name:embed-certs-242761 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-242761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:02:33.358076 2232297 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:02:33.360248 2232297 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:02:33.360486 2232297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:02:33.360564 2232297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:02:33.381356 2232297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I0414 14:02:33.381810 2232297 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:02:33.382355 2232297 main.go:141] libmachine: Using API Version  1
	I0414 14:02:33.382387 2232297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:02:33.382768 2232297 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:02:33.382986 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetMachineName
	I0414 14:02:33.383119 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .DriverName
	I0414 14:02:33.383313 2232297 start.go:159] libmachine.API.Create for "embed-certs-242761" (driver="kvm2")
	I0414 14:02:33.383346 2232297 client.go:168] LocalClient.Create starting
	I0414 14:02:33.383381 2232297 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem
	I0414 14:02:33.383419 2232297 main.go:141] libmachine: Decoding PEM data...
	I0414 14:02:33.383449 2232297 main.go:141] libmachine: Parsing certificate...
	I0414 14:02:33.383531 2232297 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem
	I0414 14:02:33.383569 2232297 main.go:141] libmachine: Decoding PEM data...
	I0414 14:02:33.383586 2232297 main.go:141] libmachine: Parsing certificate...
	I0414 14:02:33.383612 2232297 main.go:141] libmachine: Running pre-create checks...
	I0414 14:02:33.383624 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .PreCreateCheck
	I0414 14:02:33.383998 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetConfigRaw
	I0414 14:02:33.384476 2232297 main.go:141] libmachine: Creating machine...
	I0414 14:02:33.384494 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .Create
	I0414 14:02:33.384626 2232297 main.go:141] libmachine: (embed-certs-242761) creating KVM machine...
	I0414 14:02:33.384651 2232297 main.go:141] libmachine: (embed-certs-242761) creating network...
	I0414 14:02:33.386178 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | found existing default KVM network
	I0414 14:02:33.387341 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:33.387183 2232740 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:06:97:78} reservation:<nil>}
	I0414 14:02:33.388080 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:33.387985 2232740 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:a7:07:73} reservation:<nil>}
	I0414 14:02:33.389026 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:33.388945 2232740 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:54:cc:55} reservation:<nil>}
	I0414 14:02:33.390259 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:33.390188 2232740 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000293330}
	I0414 14:02:33.390321 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | created network xml: 
	I0414 14:02:33.390340 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | <network>
	I0414 14:02:33.390354 2232297 main.go:141] libmachine: (embed-certs-242761) DBG |   <name>mk-embed-certs-242761</name>
	I0414 14:02:33.390366 2232297 main.go:141] libmachine: (embed-certs-242761) DBG |   <dns enable='no'/>
	I0414 14:02:33.390383 2232297 main.go:141] libmachine: (embed-certs-242761) DBG |   
	I0414 14:02:33.390406 2232297 main.go:141] libmachine: (embed-certs-242761) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0414 14:02:33.390418 2232297 main.go:141] libmachine: (embed-certs-242761) DBG |     <dhcp>
	I0414 14:02:33.390428 2232297 main.go:141] libmachine: (embed-certs-242761) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0414 14:02:33.390436 2232297 main.go:141] libmachine: (embed-certs-242761) DBG |     </dhcp>
	I0414 14:02:33.390443 2232297 main.go:141] libmachine: (embed-certs-242761) DBG |   </ip>
	I0414 14:02:33.390450 2232297 main.go:141] libmachine: (embed-certs-242761) DBG |   
	I0414 14:02:33.390457 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | </network>
	I0414 14:02:33.390466 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | 
	I0414 14:02:33.396128 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | trying to create private KVM network mk-embed-certs-242761 192.168.72.0/24...
	I0414 14:02:33.474097 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | private KVM network mk-embed-certs-242761 192.168.72.0/24 created
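
The subnet probing above skips 192.168.39.0/24, 192.168.50.0/24 and 192.168.61.0/24 because they are already taken and settles on 192.168.72.0/24. A simplified sketch of such a "first free /24" probe using only the standard library (the candidate list and the taken-check are assumptions, not the driver's real logic):

// Rough sketch: a subnet counts as taken here if any local interface already
// has an address inside it; the first free candidate wins.
package main

import (
	"fmt"
	"net"
)

func subnetTaken(cidr string) bool {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return true
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true
	}
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
	for _, cidr := range candidates {
		if !subnetTaken(cidr) {
			fmt.Println("using free private subnet", cidr)
			return
		}
		fmt.Println("skipping subnet", cidr, "that is taken")
	}
}
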
	I0414 14:02:33.474144 2232297 main.go:141] libmachine: (embed-certs-242761) setting up store path in /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/embed-certs-242761 ...
	I0414 14:02:33.474158 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:33.474076 2232740 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:02:33.474177 2232297 main.go:141] libmachine: (embed-certs-242761) building disk image from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:02:33.474276 2232297 main.go:141] libmachine: (embed-certs-242761) Downloading /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:02:33.759475 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:33.759318 2232740 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/embed-certs-242761/id_rsa...
	I0414 14:02:33.886828 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:33.886651 2232740 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/embed-certs-242761/embed-certs-242761.rawdisk...
	I0414 14:02:33.886893 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | Writing magic tar header
	I0414 14:02:33.886913 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | Writing SSH key tar header
	I0414 14:02:33.886926 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:33.886833 2232740 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/embed-certs-242761 ...
	I0414 14:02:33.887006 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/embed-certs-242761
	I0414 14:02:33.887025 2232297 main.go:141] libmachine: (embed-certs-242761) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/embed-certs-242761 (perms=drwx------)
	I0414 14:02:33.887036 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines
	I0414 14:02:33.887050 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:02:33.887060 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077
	I0414 14:02:33.887092 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:02:33.887112 2232297 main.go:141] libmachine: (embed-certs-242761) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:02:33.887123 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | checking permissions on dir: /home/jenkins
	I0414 14:02:33.887135 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | checking permissions on dir: /home
	I0414 14:02:33.887146 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | skipping /home - not owner
	I0414 14:02:33.887163 2232297 main.go:141] libmachine: (embed-certs-242761) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube (perms=drwxr-xr-x)
	I0414 14:02:33.887176 2232297 main.go:141] libmachine: (embed-certs-242761) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077 (perms=drwxrwxr-x)
	I0414 14:02:33.887188 2232297 main.go:141] libmachine: (embed-certs-242761) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:02:33.887206 2232297 main.go:141] libmachine: (embed-certs-242761) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:02:33.887220 2232297 main.go:141] libmachine: (embed-certs-242761) creating domain...
	I0414 14:02:33.888604 2232297 main.go:141] libmachine: (embed-certs-242761) define libvirt domain using xml: 
	I0414 14:02:33.888637 2232297 main.go:141] libmachine: (embed-certs-242761) <domain type='kvm'>
	I0414 14:02:33.888647 2232297 main.go:141] libmachine: (embed-certs-242761)   <name>embed-certs-242761</name>
	I0414 14:02:33.888653 2232297 main.go:141] libmachine: (embed-certs-242761)   <memory unit='MiB'>2200</memory>
	I0414 14:02:33.888661 2232297 main.go:141] libmachine: (embed-certs-242761)   <vcpu>2</vcpu>
	I0414 14:02:33.888675 2232297 main.go:141] libmachine: (embed-certs-242761)   <features>
	I0414 14:02:33.888688 2232297 main.go:141] libmachine: (embed-certs-242761)     <acpi/>
	I0414 14:02:33.888694 2232297 main.go:141] libmachine: (embed-certs-242761)     <apic/>
	I0414 14:02:33.888702 2232297 main.go:141] libmachine: (embed-certs-242761)     <pae/>
	I0414 14:02:33.888709 2232297 main.go:141] libmachine: (embed-certs-242761)     
	I0414 14:02:33.888752 2232297 main.go:141] libmachine: (embed-certs-242761)   </features>
	I0414 14:02:33.888767 2232297 main.go:141] libmachine: (embed-certs-242761)   <cpu mode='host-passthrough'>
	I0414 14:02:33.888779 2232297 main.go:141] libmachine: (embed-certs-242761)   
	I0414 14:02:33.888785 2232297 main.go:141] libmachine: (embed-certs-242761)   </cpu>
	I0414 14:02:33.888793 2232297 main.go:141] libmachine: (embed-certs-242761)   <os>
	I0414 14:02:33.888804 2232297 main.go:141] libmachine: (embed-certs-242761)     <type>hvm</type>
	I0414 14:02:33.888814 2232297 main.go:141] libmachine: (embed-certs-242761)     <boot dev='cdrom'/>
	I0414 14:02:33.888823 2232297 main.go:141] libmachine: (embed-certs-242761)     <boot dev='hd'/>
	I0414 14:02:33.888831 2232297 main.go:141] libmachine: (embed-certs-242761)     <bootmenu enable='no'/>
	I0414 14:02:33.888840 2232297 main.go:141] libmachine: (embed-certs-242761)   </os>
	I0414 14:02:33.888854 2232297 main.go:141] libmachine: (embed-certs-242761)   <devices>
	I0414 14:02:33.888866 2232297 main.go:141] libmachine: (embed-certs-242761)     <disk type='file' device='cdrom'>
	I0414 14:02:33.888879 2232297 main.go:141] libmachine: (embed-certs-242761)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/embed-certs-242761/boot2docker.iso'/>
	I0414 14:02:33.888901 2232297 main.go:141] libmachine: (embed-certs-242761)       <target dev='hdc' bus='scsi'/>
	I0414 14:02:33.888914 2232297 main.go:141] libmachine: (embed-certs-242761)       <readonly/>
	I0414 14:02:33.888924 2232297 main.go:141] libmachine: (embed-certs-242761)     </disk>
	I0414 14:02:33.888934 2232297 main.go:141] libmachine: (embed-certs-242761)     <disk type='file' device='disk'>
	I0414 14:02:33.888945 2232297 main.go:141] libmachine: (embed-certs-242761)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:02:33.888959 2232297 main.go:141] libmachine: (embed-certs-242761)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/embed-certs-242761/embed-certs-242761.rawdisk'/>
	I0414 14:02:33.888978 2232297 main.go:141] libmachine: (embed-certs-242761)       <target dev='hda' bus='virtio'/>
	I0414 14:02:33.888989 2232297 main.go:141] libmachine: (embed-certs-242761)     </disk>
	I0414 14:02:33.888996 2232297 main.go:141] libmachine: (embed-certs-242761)     <interface type='network'>
	I0414 14:02:33.889006 2232297 main.go:141] libmachine: (embed-certs-242761)       <source network='mk-embed-certs-242761'/>
	I0414 14:02:33.889016 2232297 main.go:141] libmachine: (embed-certs-242761)       <model type='virtio'/>
	I0414 14:02:33.889029 2232297 main.go:141] libmachine: (embed-certs-242761)     </interface>
	I0414 14:02:33.889040 2232297 main.go:141] libmachine: (embed-certs-242761)     <interface type='network'>
	I0414 14:02:33.889052 2232297 main.go:141] libmachine: (embed-certs-242761)       <source network='default'/>
	I0414 14:02:33.889059 2232297 main.go:141] libmachine: (embed-certs-242761)       <model type='virtio'/>
	I0414 14:02:33.889069 2232297 main.go:141] libmachine: (embed-certs-242761)     </interface>
	I0414 14:02:33.889079 2232297 main.go:141] libmachine: (embed-certs-242761)     <serial type='pty'>
	I0414 14:02:33.889088 2232297 main.go:141] libmachine: (embed-certs-242761)       <target port='0'/>
	I0414 14:02:33.889097 2232297 main.go:141] libmachine: (embed-certs-242761)     </serial>
	I0414 14:02:33.889115 2232297 main.go:141] libmachine: (embed-certs-242761)     <console type='pty'>
	I0414 14:02:33.889127 2232297 main.go:141] libmachine: (embed-certs-242761)       <target type='serial' port='0'/>
	I0414 14:02:33.889138 2232297 main.go:141] libmachine: (embed-certs-242761)     </console>
	I0414 14:02:33.889149 2232297 main.go:141] libmachine: (embed-certs-242761)     <rng model='virtio'>
	I0414 14:02:33.889160 2232297 main.go:141] libmachine: (embed-certs-242761)       <backend model='random'>/dev/random</backend>
	I0414 14:02:33.889169 2232297 main.go:141] libmachine: (embed-certs-242761)     </rng>
	I0414 14:02:33.889178 2232297 main.go:141] libmachine: (embed-certs-242761)     
	I0414 14:02:33.889187 2232297 main.go:141] libmachine: (embed-certs-242761)     
	I0414 14:02:33.889195 2232297 main.go:141] libmachine: (embed-certs-242761)   </devices>
	I0414 14:02:33.889201 2232297 main.go:141] libmachine: (embed-certs-242761) </domain>
	I0414 14:02:33.889253 2232297 main.go:141] libmachine: (embed-certs-242761) 
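
The XML dumped above is what the driver hands to libvirt when it defines and then starts the domain. Assuming the libvirt.org/go/libvirt bindings, the define-and-create step looks roughly like this (the file name is a placeholder and error handling is trimmed):

// Sketch: define a libvirt domain from an XML description and start it.
package main

import (
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	// Hypothetical file holding domain XML like the block dumped above.
	xml, err := os.ReadFile("embed-certs-242761.xml")
	if err != nil {
		panic(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // "starting domain..."
		panic(err)
	}
}
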
	I0414 14:02:33.897203 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:27:51:e6 in network default
	I0414 14:02:33.898154 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:33.898182 2232297 main.go:141] libmachine: (embed-certs-242761) starting domain...
	I0414 14:02:33.898203 2232297 main.go:141] libmachine: (embed-certs-242761) ensuring networks are active...
	I0414 14:02:33.899155 2232297 main.go:141] libmachine: (embed-certs-242761) Ensuring network default is active
	I0414 14:02:33.899722 2232297 main.go:141] libmachine: (embed-certs-242761) Ensuring network mk-embed-certs-242761 is active
	I0414 14:02:33.900799 2232297 main.go:141] libmachine: (embed-certs-242761) getting domain XML...
	I0414 14:02:33.901877 2232297 main.go:141] libmachine: (embed-certs-242761) creating domain...
	I0414 14:02:31.734962 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:31.735491 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has current primary IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:31.735511 2231816 main.go:141] libmachine: (no-preload-496809) found domain IP: 192.168.61.8
	I0414 14:02:31.735519 2231816 main.go:141] libmachine: (no-preload-496809) reserving static IP address...
	I0414 14:02:31.735868 2231816 main.go:141] libmachine: (no-preload-496809) DBG | unable to find host DHCP lease matching {name: "no-preload-496809", mac: "52:54:00:24:6f:af", ip: "192.168.61.8"} in network mk-no-preload-496809
	I0414 14:02:31.816178 2231816 main.go:141] libmachine: (no-preload-496809) reserved static IP address 192.168.61.8 for domain no-preload-496809
	I0414 14:02:31.816207 2231816 main.go:141] libmachine: (no-preload-496809) DBG | Getting to WaitForSSH function...
	I0414 14:02:31.816215 2231816 main.go:141] libmachine: (no-preload-496809) waiting for SSH...
	I0414 14:02:31.818929 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:31.819302 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:minikube Clientid:01:52:54:00:24:6f:af}
	I0414 14:02:31.819327 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:31.819515 2231816 main.go:141] libmachine: (no-preload-496809) DBG | Using SSH client type: external
	I0414 14:02:31.819547 2231816 main.go:141] libmachine: (no-preload-496809) DBG | Using SSH private key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/no-preload-496809/id_rsa (-rw-------)
	I0414 14:02:31.819591 2231816 main.go:141] libmachine: (no-preload-496809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/no-preload-496809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:02:31.819608 2231816 main.go:141] libmachine: (no-preload-496809) DBG | About to run SSH command:
	I0414 14:02:31.819624 2231816 main.go:141] libmachine: (no-preload-496809) DBG | exit 0
	I0414 14:02:31.945038 2231816 main.go:141] libmachine: (no-preload-496809) DBG | SSH cmd err, output: <nil>: 
	I0414 14:02:31.945312 2231816 main.go:141] libmachine: (no-preload-496809) KVM machine creation complete
	I0414 14:02:31.945662 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetConfigRaw
	I0414 14:02:31.946288 2231816 main.go:141] libmachine: (no-preload-496809) Calling .DriverName
	I0414 14:02:31.946462 2231816 main.go:141] libmachine: (no-preload-496809) Calling .DriverName
	I0414 14:02:31.946664 2231816 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:02:31.946684 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetState
	I0414 14:02:31.948060 2231816 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:02:31.948073 2231816 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:02:31.948078 2231816 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:02:31.948083 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHHostname
	I0414 14:02:31.950809 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:31.951238 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:no-preload-496809 Clientid:01:52:54:00:24:6f:af}
	I0414 14:02:31.951270 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:31.951460 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHPort
	I0414 14:02:31.951634 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHKeyPath
	I0414 14:02:31.951766 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHKeyPath
	I0414 14:02:31.951890 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHUsername
	I0414 14:02:31.952008 2231816 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:31.952229 2231816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.8 22 <nil> <nil>}
	I0414 14:02:31.952239 2231816 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:02:32.056208 2231816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:02:32.056236 2231816 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:02:32.056247 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHHostname
	I0414 14:02:32.059302 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:32.059762 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:no-preload-496809 Clientid:01:52:54:00:24:6f:af}
	I0414 14:02:32.059794 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:32.060018 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHPort
	I0414 14:02:32.060199 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHKeyPath
	I0414 14:02:32.060372 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHKeyPath
	I0414 14:02:32.060567 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHUsername
	I0414 14:02:32.060790 2231816 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:32.061074 2231816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.8 22 <nil> <nil>}
	I0414 14:02:32.061094 2231816 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:02:32.169755 2231816 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:02:32.169859 2231816 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:02:32.169882 2231816 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:02:32.169897 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetMachineName
	I0414 14:02:32.170188 2231816 buildroot.go:166] provisioning hostname "no-preload-496809"
	I0414 14:02:32.170215 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetMachineName
	I0414 14:02:32.170453 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHHostname
	I0414 14:02:32.173608 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:32.174000 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:no-preload-496809 Clientid:01:52:54:00:24:6f:af}
	I0414 14:02:32.174032 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:32.174193 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHPort
	I0414 14:02:32.174401 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHKeyPath
	I0414 14:02:32.174579 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHKeyPath
	I0414 14:02:32.174743 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHUsername
	I0414 14:02:32.174909 2231816 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:32.175115 2231816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.8 22 <nil> <nil>}
	I0414 14:02:32.175130 2231816 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-496809 && echo "no-preload-496809" | sudo tee /etc/hostname
	I0414 14:02:32.295218 2231816 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-496809
	
	I0414 14:02:32.295253 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHHostname
	I0414 14:02:32.298515 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:32.298917 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:no-preload-496809 Clientid:01:52:54:00:24:6f:af}
	I0414 14:02:32.298965 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:32.299178 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHPort
	I0414 14:02:32.299343 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHKeyPath
	I0414 14:02:32.299461 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHKeyPath
	I0414 14:02:32.299545 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHUsername
	I0414 14:02:32.299718 2231816 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:32.299928 2231816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.8 22 <nil> <nil>}
	I0414 14:02:32.299946 2231816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-496809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-496809/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-496809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:02:32.413735 2231816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
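The hostname step above is one shell snippet sent over SSH: set the hostname, write /etc/hostname, then idempotently make /etc/hosts map 127.0.1.1 to the new name. A minimal Go sketch that rebuilds the same snippet; the hostnameCmd helper and the local bash -c execution are illustrative stand-ins for minikube's SSH runner, not its actual code.

package main

import (
	"fmt"
	"os/exec"
)

// hostnameCmd rebuilds the idempotent shell snippet shown in the log above:
// set the hostname, write /etc/hostname, and rewrite or append the 127.0.1.1
// entry in /etc/hosts so it points at the new name.
func hostnameCmd(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname && `+
		`if ! grep -xq '.*\s%[1]s' /etc/hosts; then `+
		`if grep -xq '127.0.1.1\s.*' /etc/hosts; then `+
		`sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts; `+
		`else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi`, name)
}

func main() {
	// minikube sends this string over SSH; running it locally with bash -c
	// is only for illustration.
	out, err := exec.Command("bash", "-c", hostnameCmd("no-preload-496809")).CombinedOutput()
	fmt.Println(string(out), err)
}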
	I0414 14:02:32.413786 2231816 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20623-2183077/.minikube CaCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20623-2183077/.minikube}
	I0414 14:02:32.413846 2231816 buildroot.go:174] setting up certificates
	I0414 14:02:32.413862 2231816 provision.go:84] configureAuth start
	I0414 14:02:32.413879 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetMachineName
	I0414 14:02:32.414181 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetIP
	I0414 14:02:32.416994 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:32.417393 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:no-preload-496809 Clientid:01:52:54:00:24:6f:af}
	I0414 14:02:32.417425 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:32.417524 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHHostname
	I0414 14:02:32.419983 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:32.420279 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:no-preload-496809 Clientid:01:52:54:00:24:6f:af}
	I0414 14:02:32.420310 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:32.420434 2231816 provision.go:143] copyHostCerts
	I0414 14:02:32.420499 2231816 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem, removing ...
	I0414 14:02:32.420517 2231816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem
	I0414 14:02:32.420579 2231816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem (1675 bytes)
	I0414 14:02:32.420701 2231816 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem, removing ...
	I0414 14:02:32.420711 2231816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem
	I0414 14:02:32.420752 2231816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem (1078 bytes)
	I0414 14:02:32.420826 2231816 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem, removing ...
	I0414 14:02:32.420836 2231816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem
	I0414 14:02:32.420855 2231816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem (1123 bytes)
	I0414 14:02:32.420910 2231816 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem org=jenkins.no-preload-496809 san=[127.0.0.1 192.168.61.8 localhost minikube no-preload-496809]
	I0414 14:02:32.689342 2231816 provision.go:177] copyRemoteCerts
	I0414 14:02:32.689407 2231816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:02:32.689437 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHHostname
	I0414 14:02:32.692698 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:32.693130 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:no-preload-496809 Clientid:01:52:54:00:24:6f:af}
	I0414 14:02:32.693162 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:32.693341 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHPort
	I0414 14:02:32.693574 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHKeyPath
	I0414 14:02:32.693766 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHUsername
	I0414 14:02:32.693938 2231816 sshutil.go:53] new ssh client: &{IP:192.168.61.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/no-preload-496809/id_rsa Username:docker}
	I0414 14:02:32.779810 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0414 14:02:32.804818 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 14:02:32.828387 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 14:02:32.852355 2231816 provision.go:87] duration metric: took 438.473982ms to configureAuth
	I0414 14:02:32.852392 2231816 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:02:32.852558 2231816 config.go:182] Loaded profile config "no-preload-496809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:02:32.852641 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHHostname
	I0414 14:02:32.855686 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:32.856042 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:no-preload-496809 Clientid:01:52:54:00:24:6f:af}
	I0414 14:02:32.856073 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:32.856216 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHPort
	I0414 14:02:32.856420 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHKeyPath
	I0414 14:02:32.856570 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHKeyPath
	I0414 14:02:32.856743 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHUsername
	I0414 14:02:32.856918 2231816 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:32.857157 2231816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.8 22 <nil> <nil>}
	I0414 14:02:32.857176 2231816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 14:02:33.108764 2231816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 14:02:33.108802 2231816 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:02:33.108813 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetURL
	I0414 14:02:33.110013 2231816 main.go:141] libmachine: (no-preload-496809) DBG | using libvirt version 6000000
	I0414 14:02:33.112829 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:33.113233 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:no-preload-496809 Clientid:01:52:54:00:24:6f:af}
	I0414 14:02:33.113263 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:33.113465 2231816 main.go:141] libmachine: Docker is up and running!
	I0414 14:02:33.113476 2231816 main.go:141] libmachine: Reticulating splines...
	I0414 14:02:33.113484 2231816 client.go:171] duration metric: took 26.080919262s to LocalClient.Create
	I0414 14:02:33.113510 2231816 start.go:167] duration metric: took 26.080982847s to libmachine.API.Create "no-preload-496809"
	I0414 14:02:33.113522 2231816 start.go:293] postStartSetup for "no-preload-496809" (driver="kvm2")
	I0414 14:02:33.113532 2231816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:02:33.113550 2231816 main.go:141] libmachine: (no-preload-496809) Calling .DriverName
	I0414 14:02:33.113837 2231816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:02:33.113880 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHHostname
	I0414 14:02:33.116314 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:33.116667 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:no-preload-496809 Clientid:01:52:54:00:24:6f:af}
	I0414 14:02:33.116698 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:33.116850 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHPort
	I0414 14:02:33.117036 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHKeyPath
	I0414 14:02:33.117256 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHUsername
	I0414 14:02:33.117436 2231816 sshutil.go:53] new ssh client: &{IP:192.168.61.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/no-preload-496809/id_rsa Username:docker}
	I0414 14:02:33.199747 2231816 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:02:33.204435 2231816 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:02:33.204463 2231816 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/addons for local assets ...
	I0414 14:02:33.204538 2231816 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/files for local assets ...
	I0414 14:02:33.204619 2231816 filesync.go:149] local asset: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem -> 21904002.pem in /etc/ssl/certs
	I0414 14:02:33.204717 2231816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:02:33.214875 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:02:33.242204 2231816 start.go:296] duration metric: took 128.651787ms for postStartSetup
	I0414 14:02:33.242262 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetConfigRaw
	I0414 14:02:33.242876 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetIP
	I0414 14:02:33.245715 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:33.246142 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:no-preload-496809 Clientid:01:52:54:00:24:6f:af}
	I0414 14:02:33.246168 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:33.246424 2231816 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/config.json ...
	I0414 14:02:33.246607 2231816 start.go:128] duration metric: took 26.236032327s to createHost
	I0414 14:02:33.246630 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHHostname
	I0414 14:02:33.249275 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:33.249610 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:no-preload-496809 Clientid:01:52:54:00:24:6f:af}
	I0414 14:02:33.249647 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:33.249790 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHPort
	I0414 14:02:33.250000 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHKeyPath
	I0414 14:02:33.250181 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHKeyPath
	I0414 14:02:33.250324 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHUsername
	I0414 14:02:33.250441 2231816 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:33.250711 2231816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.8 22 <nil> <nil>}
	I0414 14:02:33.250725 2231816 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:02:33.357671 2231816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744639353.313787122
	
	I0414 14:02:33.357693 2231816 fix.go:216] guest clock: 1744639353.313787122
	I0414 14:02:33.357700 2231816 fix.go:229] Guest: 2025-04-14 14:02:33.313787122 +0000 UTC Remote: 2025-04-14 14:02:33.246617294 +0000 UTC m=+58.105502999 (delta=67.169828ms)
	I0414 14:02:33.357721 2231816 fix.go:200] guest clock delta is within tolerance: 67.169828ms
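The clock check above runs `date +%s.%N` on the guest and compares it with the host clock, accepting a small skew. A rough Go sketch of that comparison; the clockDelta helper and the 2-second tolerance are assumptions for illustration, not minikube's exact threshold.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output (as logged above) and
// returns the absolute skew against a host timestamp.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

func main() {
	d, _ := clockDelta("1744639353.313787122", time.Now())
	fmt.Printf("delta=%v withinTolerance=%v\n", d, d <= 2*time.Second)
}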
	I0414 14:02:33.357725 2231816 start.go:83] releasing machines lock for "no-preload-496809", held for 26.347326954s
	I0414 14:02:33.357748 2231816 main.go:141] libmachine: (no-preload-496809) Calling .DriverName
	I0414 14:02:33.358060 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetIP
	I0414 14:02:33.361340 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:33.361733 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:no-preload-496809 Clientid:01:52:54:00:24:6f:af}
	I0414 14:02:33.361766 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:33.361957 2231816 main.go:141] libmachine: (no-preload-496809) Calling .DriverName
	I0414 14:02:33.362433 2231816 main.go:141] libmachine: (no-preload-496809) Calling .DriverName
	I0414 14:02:33.362597 2231816 main.go:141] libmachine: (no-preload-496809) Calling .DriverName
	I0414 14:02:33.362690 2231816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:02:33.362738 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHHostname
	I0414 14:02:33.362863 2231816 ssh_runner.go:195] Run: cat /version.json
	I0414 14:02:33.362890 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHHostname
	I0414 14:02:33.365442 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:33.365835 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:no-preload-496809 Clientid:01:52:54:00:24:6f:af}
	I0414 14:02:33.365862 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:33.365881 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:33.366016 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHPort
	I0414 14:02:33.366203 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHKeyPath
	I0414 14:02:33.366349 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:no-preload-496809 Clientid:01:52:54:00:24:6f:af}
	I0414 14:02:33.366363 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHUsername
	I0414 14:02:33.366376 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:33.366524 2231816 sshutil.go:53] new ssh client: &{IP:192.168.61.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/no-preload-496809/id_rsa Username:docker}
	I0414 14:02:33.366544 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHPort
	I0414 14:02:33.366703 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHKeyPath
	I0414 14:02:33.366828 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHUsername
	I0414 14:02:33.366964 2231816 sshutil.go:53] new ssh client: &{IP:192.168.61.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/no-preload-496809/id_rsa Username:docker}
	I0414 14:02:33.450842 2231816 ssh_runner.go:195] Run: systemctl --version
	I0414 14:02:33.485343 2231816 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 14:02:33.650284 2231816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:02:33.657234 2231816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:02:33.657308 2231816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:02:33.674489 2231816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:02:33.674523 2231816 start.go:495] detecting cgroup driver to use...
	I0414 14:02:33.674606 2231816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 14:02:33.691568 2231816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 14:02:33.706080 2231816 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:02:33.706150 2231816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:02:33.720030 2231816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:02:33.733977 2231816 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:02:33.856683 2231816 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:02:34.003561 2231816 docker.go:233] disabling docker service ...
	I0414 14:02:34.003642 2231816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:02:34.020460 2231816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:02:34.034561 2231816 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:02:34.186359 2231816 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:02:34.298396 2231816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:02:34.313478 2231816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:02:34.332149 2231816 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 14:02:34.332225 2231816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:02:34.342827 2231816 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 14:02:34.342900 2231816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:02:34.353644 2231816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:02:34.364782 2231816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:02:34.375539 2231816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:02:34.386299 2231816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:02:34.396615 2231816 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:02:34.414174 2231816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:02:34.424669 2231816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:02:34.437461 2231816 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:02:34.437541 2231816 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:02:34.455449 2231816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:02:34.466623 2231816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:02:34.589793 2231816 ssh_runner.go:195] Run: sudo systemctl restart crio
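The run of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf before crio is restarted; the two edits that matter most for this run pin the pause image and switch the cgroup manager to cgroupfs. A small Go sketch of those two rewrites as plain regexp edits; applyCrioOverrides and the sample input are illustrative, not minikube's implementation.

package main

import (
	"fmt"
	"regexp"
)

// applyCrioOverrides mimics the two key sed edits from the log above on the
// 02-crio.conf drop-in: pin pause_image to registry.k8s.io/pause:3.10 and
// force cgroup_manager to "cgroupfs".
func applyCrioOverrides(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(applyCrioOverrides(in))
}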
	I0414 14:02:34.683769 2231816 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 14:02:34.683852 2231816 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 14:02:34.689213 2231816 start.go:563] Will wait 60s for crictl version
	I0414 14:02:34.689267 2231816 ssh_runner.go:195] Run: which crictl
	I0414 14:02:34.693952 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:02:34.739481 2231816 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 14:02:34.739575 2231816 ssh_runner.go:195] Run: crio --version
	I0414 14:02:34.771731 2231816 ssh_runner.go:195] Run: crio --version
	I0414 14:02:34.814873 2231816 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 14:02:34.816055 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetIP
	I0414 14:02:34.819878 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:34.820440 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:no-preload-496809 Clientid:01:52:54:00:24:6f:af}
	I0414 14:02:34.820480 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:02:34.820699 2231816 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0414 14:02:34.826471 2231816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:02:34.844367 2231816 kubeadm.go:883] updating cluster {Name:no-preload-496809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-496809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.8 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:02:34.844499 2231816 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:02:34.844548 2231816 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:02:34.888841 2231816 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:02:34.888878 2231816 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.32.2 registry.k8s.io/kube-controller-manager:v1.32.2 registry.k8s.io/kube-scheduler:v1.32.2 registry.k8s.io/kube-proxy:v1.32.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.16-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 14:02:34.888959 2231816 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:02:34.889024 2231816 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0414 14:02:34.889052 2231816 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.2
	I0414 14:02:34.889055 2231816 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.2
	I0414 14:02:34.889024 2231816 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.2
	I0414 14:02:34.889249 2231816 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0414 14:02:34.889316 2231816 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0414 14:02:34.889316 2231816 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0414 14:02:34.890416 2231816 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0414 14:02:34.890494 2231816 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.2
	I0414 14:02:34.890416 2231816 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0414 14:02:34.890501 2231816 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0414 14:02:34.890946 2231816 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0414 14:02:34.891009 2231816 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:02:34.890946 2231816 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.2
	I0414 14:02:34.891048 2231816 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.2
	I0414 14:02:35.012993 2231816 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.32.2
	I0414 14:02:35.013764 2231816 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0414 14:02:35.030164 2231816 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.32.2
	I0414 14:02:35.031079 2231816 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.16-0
	I0414 14:02:35.031902 2231816 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.32.2
	I0414 14:02:35.035189 2231816 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0414 14:02:35.065364 2231816 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.32.2
	I0414 14:02:35.117188 2231816 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.32.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.32.2" does not exist at hash "b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389" in container runtime
	I0414 14:02:35.117239 2231816 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0414 14:02:35.117297 2231816 ssh_runner.go:195] Run: which crictl
	I0414 14:02:35.178341 2231816 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0414 14:02:35.178419 2231816 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0414 14:02:35.178500 2231816 ssh_runner.go:195] Run: which crictl
	I0414 14:02:35.242051 2232297 main.go:141] libmachine: (embed-certs-242761) waiting for IP...
	I0414 14:02:35.243104 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:35.243727 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | unable to find current IP address of domain embed-certs-242761 in network mk-embed-certs-242761
	I0414 14:02:35.243809 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:35.243720 2232740 retry.go:31] will retry after 239.341691ms: waiting for domain to come up
	I0414 14:02:35.485526 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:35.486197 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | unable to find current IP address of domain embed-certs-242761 in network mk-embed-certs-242761
	I0414 14:02:35.486226 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:35.486163 2232740 retry.go:31] will retry after 324.447397ms: waiting for domain to come up
	I0414 14:02:35.812746 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:35.813263 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | unable to find current IP address of domain embed-certs-242761 in network mk-embed-certs-242761
	I0414 14:02:35.813288 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:35.813241 2232740 retry.go:31] will retry after 483.687383ms: waiting for domain to come up
	I0414 14:02:36.298996 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:36.299897 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | unable to find current IP address of domain embed-certs-242761 in network mk-embed-certs-242761
	I0414 14:02:36.299933 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:36.299772 2232740 retry.go:31] will retry after 497.680842ms: waiting for domain to come up
	I0414 14:02:36.799636 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:36.800374 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | unable to find current IP address of domain embed-certs-242761 in network mk-embed-certs-242761
	I0414 14:02:36.800407 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:36.800342 2232740 retry.go:31] will retry after 572.649429ms: waiting for domain to come up
	I0414 14:02:37.375353 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:37.375822 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | unable to find current IP address of domain embed-certs-242761 in network mk-embed-certs-242761
	I0414 14:02:37.375862 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:37.375809 2232740 retry.go:31] will retry after 873.742439ms: waiting for domain to come up
	I0414 14:02:38.251079 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:38.251590 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | unable to find current IP address of domain embed-certs-242761 in network mk-embed-certs-242761
	I0414 14:02:38.251613 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:38.251573 2232740 retry.go:31] will retry after 933.155736ms: waiting for domain to come up
	I0414 14:02:39.186119 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:39.186704 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | unable to find current IP address of domain embed-certs-242761 in network mk-embed-certs-242761
	I0414 14:02:39.186737 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:39.186668 2232740 retry.go:31] will retry after 1.268078196s: waiting for domain to come up
	I0414 14:02:35.221508 2231816 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.32.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.32.2" does not exist at hash "85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef" in container runtime
	I0414 14:02:35.221611 2231816 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.32.2
	I0414 14:02:35.221522 2231816 cache_images.go:116] "registry.k8s.io/etcd:3.5.16-0" needs transfer: "registry.k8s.io/etcd:3.5.16-0" does not exist at hash "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" in container runtime
	I0414 14:02:35.221670 2231816 ssh_runner.go:195] Run: which crictl
	I0414 14:02:35.221679 2231816 cache_images.go:116] "registry.k8s.io/pause:3.10" needs transfer: "registry.k8s.io/pause:3.10" does not exist at hash "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" in container runtime
	I0414 14:02:35.221702 2231816 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.16-0
	I0414 14:02:35.221573 2231816 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.32.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.32.2" does not exist at hash "d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d" in container runtime
	I0414 14:02:35.221740 2231816 cri.go:218] Removing image: registry.k8s.io/pause:3.10
	I0414 14:02:35.221752 2231816 ssh_runner.go:195] Run: which crictl
	I0414 14:02:35.221757 2231816 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.32.2
	I0414 14:02:35.221786 2231816 ssh_runner.go:195] Run: which crictl
	I0414 14:02:35.221803 2231816 ssh_runner.go:195] Run: which crictl
	I0414 14:02:35.242519 2231816 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.32.2" needs transfer: "registry.k8s.io/kube-proxy:v1.32.2" does not exist at hash "f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5" in container runtime
	I0414 14:02:35.242571 2231816 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.32.2
	I0414 14:02:35.242635 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.2
	I0414 14:02:35.242638 2231816 ssh_runner.go:195] Run: which crictl
	I0414 14:02:35.242719 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0414 14:02:35.242768 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.2
	I0414 14:02:35.242811 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0414 14:02:35.242838 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.2
	I0414 14:02:35.242903 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0414 14:02:35.374416 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.2
	I0414 14:02:35.374501 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0414 14:02:35.397111 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.2
	I0414 14:02:35.397235 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0414 14:02:35.397235 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.2
	I0414 14:02:35.397306 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0414 14:02:35.397291 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.2
	I0414 14:02:35.516838 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.2
	I0414 14:02:35.516907 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0414 14:02:35.546887 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.2
	I0414 14:02:35.546887 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.2
	I0414 14:02:35.600163 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0414 14:02:35.600218 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0414 14:02:35.600259 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.2
	I0414 14:02:35.657784 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.2
	I0414 14:02:35.657799 2231816 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0414 14:02:35.657941 2231816 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0414 14:02:35.696747 2231816 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2
	I0414 14:02:35.696803 2231816 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2
	I0414 14:02:35.696888 2231816 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.2
	I0414 14:02:35.696895 2231816 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.2
	I0414 14:02:35.745927 2231816 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0414 14:02:35.745942 2231816 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0414 14:02:35.746054 2231816 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0
	I0414 14:02:35.746061 2231816 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10
	I0414 14:02:35.746054 2231816 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2
	I0414 14:02:35.746200 2231816 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.2
	I0414 14:02:35.769633 2231816 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2
	I0414 14:02:35.769684 2231816 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.3': No such file or directory
	I0414 14:02:35.769716 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 --> /var/lib/minikube/images/coredns_v1.11.3 (18571264 bytes)
	I0414 14:02:35.769752 2231816 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.2
	I0414 14:02:35.769753 2231816 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.32.2: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.32.2': No such file or directory
	I0414 14:02:35.769770 2231816 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.32.2: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.32.2': No such file or directory
	I0414 14:02:35.769782 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 --> /var/lib/minikube/images/kube-apiserver_v1.32.2 (28680704 bytes)
	I0414 14:02:35.769798 2231816 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10': No such file or directory
	I0414 14:02:35.769799 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 --> /var/lib/minikube/images/kube-scheduler_v1.32.2 (20667904 bytes)
	I0414 14:02:35.769815 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 --> /var/lib/minikube/images/pause_3.10 (321024 bytes)
	I0414 14:02:35.769836 2231816 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.16-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.16-0': No such file or directory
	I0414 14:02:35.769850 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 --> /var/lib/minikube/images/etcd_3.5.16-0 (57690112 bytes)
	I0414 14:02:35.769868 2231816 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.32.2: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.32.2': No such file or directory
	I0414 14:02:35.769886 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 --> /var/lib/minikube/images/kube-controller-manager_v1.32.2 (26269696 bytes)
	I0414 14:02:35.810820 2231816 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.32.2: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.32.2': No such file or directory
	I0414 14:02:35.810885 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 --> /var/lib/minikube/images/kube-proxy_v1.32.2 (30910464 bytes)
	I0414 14:02:35.892874 2231816 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10
	I0414 14:02:35.892959 2231816 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10
	I0414 14:02:36.606584 2231816 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 from cache
	I0414 14:02:36.606645 2231816 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0414 14:02:36.606724 2231816 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0414 14:02:37.675996 2231816 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:02:38.700380 2231816 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.093622165s)
	I0414 14:02:38.700420 2231816 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0414 14:02:38.700446 2231816 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.32.2
	I0414 14:02:38.700501 2231816 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.2
	I0414 14:02:38.700510 2231816 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.024460095s)
	I0414 14:02:38.700567 2231816 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0414 14:02:38.700607 2231816 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:02:38.700656 2231816 ssh_runner.go:195] Run: which crictl
	I0414 14:02:38.705480 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:02:40.456162 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:40.456748 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | unable to find current IP address of domain embed-certs-242761 in network mk-embed-certs-242761
	I0414 14:02:40.456774 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:40.456701 2232740 retry.go:31] will retry after 1.194215968s: waiting for domain to come up
	I0414 14:02:41.653150 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:41.653638 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | unable to find current IP address of domain embed-certs-242761 in network mk-embed-certs-242761
	I0414 14:02:41.653667 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:41.653599 2232740 retry.go:31] will retry after 1.620328516s: waiting for domain to come up
	I0414 14:02:43.276484 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:43.277067 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | unable to find current IP address of domain embed-certs-242761 in network mk-embed-certs-242761
	I0414 14:02:43.277101 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:43.277043 2232740 retry.go:31] will retry after 2.865720531s: waiting for domain to come up
	I0414 14:02:40.889727 2231816 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.2: (2.189189418s)
	I0414 14:02:40.889759 2231816 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 from cache
	I0414 14:02:40.889787 2231816 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.32.2
	I0414 14:02:40.889846 2231816 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.2
	I0414 14:02:40.889846 2231816 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.184298775s)
	I0414 14:02:40.889916 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:02:40.941596 2231816 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:02:43.270810 2231816 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.329168351s)
	I0414 14:02:43.270877 2231816 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0414 14:02:43.270949 2231816 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.2: (2.381076221s)
	I0414 14:02:43.270967 2231816 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 from cache
	I0414 14:02:43.270992 2231816 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0414 14:02:43.271010 2231816 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.32.2
	I0414 14:02:43.271101 2231816 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.2
	I0414 14:02:43.277313 2231816 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0414 14:02:43.277353 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0414 14:02:46.146694 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:46.147310 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | unable to find current IP address of domain embed-certs-242761 in network mk-embed-certs-242761
	I0414 14:02:46.147336 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:46.147273 2232740 retry.go:31] will retry after 3.340471882s: waiting for domain to come up
	I0414 14:02:49.489061 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:49.489617 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | unable to find current IP address of domain embed-certs-242761 in network mk-embed-certs-242761
	I0414 14:02:49.489641 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:49.489578 2232740 retry.go:31] will retry after 3.834221551s: waiting for domain to come up
	I0414 14:02:45.307707 2231816 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.2: (2.036571113s)
	I0414 14:02:45.307743 2231816 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 from cache
	I0414 14:02:45.307776 2231816 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.32.2
	I0414 14:02:45.307829 2231816 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.2
	I0414 14:02:47.699386 2231816 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.2: (2.391527043s)
	I0414 14:02:47.699416 2231816 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 from cache
	I0414 14:02:47.699466 2231816 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.16-0
	I0414 14:02:47.699539 2231816 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0
	I0414 14:02:53.325884 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:53.326333 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | unable to find current IP address of domain embed-certs-242761 in network mk-embed-certs-242761
	I0414 14:02:53.326363 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | I0414 14:02:53.326318 2232740 retry.go:31] will retry after 4.946715362s: waiting for domain to come up
	I0414 14:02:51.417685 2231816 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0: (3.718107754s)
	I0414 14:02:51.417724 2231816 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 from cache
	I0414 14:02:51.417760 2231816 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0414 14:02:51.417804 2231816 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0414 14:02:52.366441 2231816 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0414 14:02:52.366501 2231816 cache_images.go:123] Successfully loaded all cached images
	I0414 14:02:52.366511 2231816 cache_images.go:92] duration metric: took 17.477613441s to LoadCachedImages
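The sequence above is the whole cached-image transfer: stat each target under /var/lib/minikube/images, scp only what is missing, then "podman load" it into the CRI-O image store. A minimal local sketch of that check-then-load pattern, assuming local exec and an illustrative cache path (the real code runs every command on the guest through ssh_runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage mirrors the pattern in the log: stat the target under
// /var/lib/minikube/images, copy it only if it is missing, then load it into
// the container runtime with "podman load". Locally a plain cp stands in for
// the scp that minikube's ssh_runner performs against the guest.
func loadCachedImage(cachePath, nodePath string) error {
	if _, err := os.Stat(nodePath); err == nil {
		fmt.Printf("%s already present, skipping copy\n", nodePath)
	} else {
		if err := exec.Command("cp", cachePath, nodePath).Run(); err != nil {
			return fmt.Errorf("copy %s: %w", cachePath, err)
		}
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", nodePath).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Illustrative paths only; the real cache lives under the Jenkins workspace.
	if err := loadCachedImage("cache/pause_3.10", "/var/lib/minikube/images/pause_3.10"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}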
	I0414 14:02:52.366528 2231816 kubeadm.go:934] updating node { 192.168.61.8 8443 v1.32.2 crio true true} ...
	I0414 14:02:52.366665 2231816 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-496809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-496809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:02:52.366738 2231816 ssh_runner.go:195] Run: crio config
	I0414 14:02:52.420123 2231816 cni.go:84] Creating CNI manager for ""
	I0414 14:02:52.420149 2231816 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:02:52.420160 2231816 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:02:52.420181 2231816 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.8 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-496809 NodeName:no-preload-496809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:02:52.420295 2231816 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-496809"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.8"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.8"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
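	The kubeadm config above is rendered from the profile's values (node IP, version, subnets) rather than written by hand. A much-reduced sketch of rendering such a ClusterConfiguration with text/template, assuming only the handful of fields shown; the actual minikube template also covers the API server, controller-manager, scheduler and etcd extra args seen in the log:

package main

import (
	"os"
	"text/template"
)

// A much-reduced ClusterConfiguration template for illustration only.
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.Port}}
kubernetesVersion: {{.Version}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("cfg").Parse(clusterCfg))
	_ = t.Execute(os.Stdout, map[string]string{
		"Port":          "8443",
		"Version":       "v1.32.2",
		"PodSubnet":     "10.244.0.0/16",
		"ServiceSubnet": "10.96.0.0/12",
	})
}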
	
	I0414 14:02:52.420361 2231816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:02:52.431341 2231816 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0414 14:02:52.431413 2231816 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0414 14:02:52.442073 2231816 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0414 14:02:52.442093 2231816 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256
	I0414 14:02:52.442145 2231816 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256
	I0414 14:02:52.442170 2231816 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:02:52.442196 2231816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:02:52.442170 2231816 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0414 14:02:52.448437 2231816 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0414 14:02:52.448475 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/linux/amd64/v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0414 14:02:52.483204 2231816 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0414 14:02:52.483247 2231816 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:02:52.483255 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0414 14:02:52.506926 2231816 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0414 14:02:52.506985 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0414 14:02:53.262218 2231816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 14:02:53.277422 2231816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0414 14:02:53.303443 2231816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:02:53.321719 2231816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0414 14:02:53.339351 2231816 ssh_runner.go:195] Run: grep 192.168.61.8	control-plane.minikube.internal$ /etc/hosts
	I0414 14:02:53.343501 2231816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
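The one-liner above rewrites /etc/hosts in place: it filters out any existing control-plane.minikube.internal line, appends the fresh IP mapping, and copies the temp file back under sudo. A rough Go sketch of the same keep-and-append logic, operating on an ordinary local file without sudo (file name and helper name are illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any line already ending in "<tab><host>" and appends
// a fresh "ip<tab>host" mapping, mirroring the grep -v / echo pipeline above.
// The real flow performs this under sudo on the guest's /etc/hosts.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("hosts.test", "192.168.61.8", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}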
	I0414 14:02:53.356473 2231816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:02:53.474257 2231816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:02:53.493053 2231816 certs.go:68] Setting up /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809 for IP: 192.168.61.8
	I0414 14:02:53.493078 2231816 certs.go:194] generating shared ca certs ...
	I0414 14:02:53.493097 2231816 certs.go:226] acquiring lock for ca certs: {Name:mkd994da28098ae08a84efba20f096b52fe71222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:53.493296 2231816 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key
	I0414 14:02:53.493358 2231816 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key
	I0414 14:02:53.493373 2231816 certs.go:256] generating profile certs ...
	I0414 14:02:53.493453 2231816 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/client.key
	I0414 14:02:53.493470 2231816 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/client.crt with IP's: []
	I0414 14:02:53.657653 2231816 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/client.crt ...
	I0414 14:02:53.657683 2231816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/client.crt: {Name:mk64d43a3ef67247913cbbb37d02d23512aaf7ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:53.657899 2231816 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/client.key ...
	I0414 14:02:53.657916 2231816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/client.key: {Name:mk8462f21db62715b00eefa56eb8b1103ff8ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:53.658032 2231816 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/apiserver.key.8a8a1b26
	I0414 14:02:53.658056 2231816 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/apiserver.crt.8a8a1b26 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.8]
	I0414 14:02:53.971348 2231816 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/apiserver.crt.8a8a1b26 ...
	I0414 14:02:53.971382 2231816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/apiserver.crt.8a8a1b26: {Name:mkbb6925c460396d18c55e986f80e7ed5266d3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:53.971575 2231816 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/apiserver.key.8a8a1b26 ...
	I0414 14:02:53.971590 2231816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/apiserver.key.8a8a1b26: {Name:mkdc3f3b635fb20a97e7f5b6a5ccbea596b87f31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:53.971662 2231816 certs.go:381] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/apiserver.crt.8a8a1b26 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/apiserver.crt
	I0414 14:02:53.971741 2231816 certs.go:385] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/apiserver.key.8a8a1b26 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/apiserver.key
	I0414 14:02:53.971796 2231816 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/proxy-client.key
	I0414 14:02:53.971810 2231816 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/proxy-client.crt with IP's: []
	I0414 14:02:54.353901 2231816 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/proxy-client.crt ...
	I0414 14:02:54.353936 2231816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/proxy-client.crt: {Name:mkf006dc31598bebc9715551340c7f49f36b5ecf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:54.354117 2231816 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/proxy-client.key ...
	I0414 14:02:54.354132 2231816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/proxy-client.key: {Name:mkd238a46d07e6cc0224b3ea40376daff081e355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
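Each profile cert generated above is an x509 certificate whose SANs carry the listed IPs and which is signed with the shared minikube CA. A condensed crypto/x509 sketch that builds a certificate with those SANs; for brevity it self-signs instead of signing with the minikubeCA key the real code loads from the .minikube directory:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// The apiserver profile cert in the log carries these IPs as SANs; here the
	// certificate is self-signed for brevity rather than CA-signed.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.8"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}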
	I0414 14:02:54.354303 2231816 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem (1338 bytes)
	W0414 14:02:54.354340 2231816 certs.go:480] ignoring /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400_empty.pem, impossibly tiny 0 bytes
	I0414 14:02:54.354350 2231816 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 14:02:54.354374 2231816 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem (1078 bytes)
	I0414 14:02:54.354400 2231816 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:02:54.354421 2231816 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem (1675 bytes)
	I0414 14:02:54.354457 2231816 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:02:54.355119 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:02:54.381825 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:02:54.405787 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:02:54.429104 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:02:54.453874 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0414 14:02:54.478586 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 14:02:54.503043 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:02:54.527794 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 14:02:54.552848 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:02:54.576306 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem --> /usr/share/ca-certificates/2190400.pem (1338 bytes)
	I0414 14:02:54.600724 2231816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /usr/share/ca-certificates/21904002.pem (1708 bytes)
	I0414 14:02:54.624199 2231816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:02:54.641408 2231816 ssh_runner.go:195] Run: openssl version
	I0414 14:02:54.647638 2231816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:02:54.660569 2231816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:02:54.665467 2231816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:02:54.665517 2231816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:02:54.671612 2231816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:02:54.683184 2231816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2190400.pem && ln -fs /usr/share/ca-certificates/2190400.pem /etc/ssl/certs/2190400.pem"
	I0414 14:02:54.696537 2231816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2190400.pem
	I0414 14:02:54.701611 2231816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 13:02 /usr/share/ca-certificates/2190400.pem
	I0414 14:02:54.701689 2231816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2190400.pem
	I0414 14:02:54.707789 2231816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2190400.pem /etc/ssl/certs/51391683.0"
	I0414 14:02:54.719293 2231816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21904002.pem && ln -fs /usr/share/ca-certificates/21904002.pem /etc/ssl/certs/21904002.pem"
	I0414 14:02:54.732318 2231816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21904002.pem
	I0414 14:02:54.737460 2231816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 13:02 /usr/share/ca-certificates/21904002.pem
	I0414 14:02:54.737510 2231816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21904002.pem
	I0414 14:02:54.743654 2231816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21904002.pem /etc/ssl/certs/3ec20f2e.0"
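The openssl x509 -hash / ln -fs pairs above install each CA under the hash-named symlink (for example b5213941.0 for minikubeCA.pem) that OpenSSL's certificate lookup expects. A small sketch of the same hash-and-link step, using a local directory in place of /etc/ssl/certs and no sudo:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash asks openssl for the subject hash of a CA certificate and
// creates the "<hash>.0" symlink that OpenSSL's lookup machinery expects.
// certDir stands in for /etc/ssl/certs; nothing here runs under sudo.
func linkByHash(certPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	if err := os.MkdirAll(certDir, 0755); err != nil {
		return err
	}
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // the -f of "ln -fs"
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("minikubeCA.pem", "certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}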
	I0414 14:02:54.756754 2231816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:02:54.761494 2231816 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:02:54.761556 2231816 kubeadm.go:392] StartCluster: {Name:no-preload-496809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-496809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.8 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:02:54.761641 2231816 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 14:02:54.761699 2231816 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:02:54.805082 2231816 cri.go:89] found id: ""
	I0414 14:02:54.805202 2231816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:02:54.817093 2231816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:02:54.827450 2231816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:02:54.839898 2231816 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:02:54.839917 2231816 kubeadm.go:157] found existing configuration files:
	
	I0414 14:02:54.839974 2231816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:02:54.849959 2231816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:02:54.850032 2231816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:02:54.861249 2231816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:02:54.871698 2231816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:02:54.871754 2231816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:02:54.882773 2231816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:02:54.893560 2231816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:02:54.893623 2231816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:02:54.912001 2231816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:02:54.922258 2231816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:02:54.922339 2231816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
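Each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed so the kubeadm init that follows can regenerate it. A compact sketch of that keep-or-remove check on local paths (file names reused from the log, helper name illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleConf removes a kubeconfig-style file unless it already references
// the expected API endpoint, mirroring the grep-then-rm sequence in the log.
func pruneStaleConf(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // nothing to clean up
		}
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // already points at the right control plane
	}
	return os.Remove(path)
}

func main() {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		if err := pruneStaleConf(f, "https://control-plane.minikube.internal:8443"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}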
	I0414 14:02:54.932268 2231816 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:02:54.991711 2231816 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:02:54.991807 2231816 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:02:55.087613 2231816 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:02:55.087787 2231816 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:02:55.087905 2231816 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:02:55.109978 2231816 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:02:55.111700 2231816 out.go:235]   - Generating certificates and keys ...
	I0414 14:02:55.111809 2231816 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:02:55.111893 2231816 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:02:55.345531 2231816 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:02:55.793190 2231816 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:02:55.890436 2231816 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:02:56.015553 2231816 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:02:56.119083 2231816 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:02:56.119283 2231816 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-496809] and IPs [192.168.61.8 127.0.0.1 ::1]
	I0414 14:02:56.235987 2231816 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:02:56.236112 2231816 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-496809] and IPs [192.168.61.8 127.0.0.1 ::1]
	I0414 14:02:56.427811 2231816 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:02:56.565445 2231816 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:02:56.755592 2231816 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:02:56.755687 2231816 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:02:56.883314 2231816 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:02:57.011824 2231816 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:02:57.341611 2231816 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:02:57.462792 2231816 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:02:57.745723 2231816 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:02:57.746299 2231816 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:02:57.748902 2231816 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:02:58.278851 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:58.279681 2232297 main.go:141] libmachine: (embed-certs-242761) found domain IP: 192.168.72.14
	I0414 14:02:58.279708 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has current primary IP address 192.168.72.14 and MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:58.279713 2232297 main.go:141] libmachine: (embed-certs-242761) reserving static IP address...
	I0414 14:02:58.280260 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | unable to find host DHCP lease matching {name: "embed-certs-242761", mac: "52:54:00:7d:e7:58", ip: "192.168.72.14"} in network mk-embed-certs-242761
	I0414 14:02:58.365770 2232297 main.go:141] libmachine: (embed-certs-242761) reserved static IP address 192.168.72.14 for domain embed-certs-242761
	I0414 14:02:58.365803 2232297 main.go:141] libmachine: (embed-certs-242761) waiting for SSH...
	I0414 14:02:58.365813 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | Getting to WaitForSSH function...
	I0414 14:02:58.369374 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:58.369821 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:e7:58", ip: ""} in network mk-embed-certs-242761: {Iface:virbr3 ExpiryTime:2025-04-14 15:02:49 +0000 UTC Type:0 Mac:52:54:00:7d:e7:58 Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7d:e7:58}
	I0414 14:02:58.369855 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined IP address 192.168.72.14 and MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:58.369955 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | Using SSH client type: external
	I0414 14:02:58.369989 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | Using SSH private key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/embed-certs-242761/id_rsa (-rw-------)
	I0414 14:02:58.370052 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/embed-certs-242761/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:02:58.370074 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | About to run SSH command:
	I0414 14:02:58.370087 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | exit 0
	I0414 14:02:58.501032 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | SSH cmd err, output: <nil>: 
	I0414 14:02:58.501338 2232297 main.go:141] libmachine: (embed-certs-242761) KVM machine creation complete
	I0414 14:02:58.501653 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetConfigRaw
	I0414 14:02:58.502331 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .DriverName
	I0414 14:02:58.502531 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .DriverName
	I0414 14:02:58.502693 2232297 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:02:58.502708 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetState
	I0414 14:02:58.504163 2232297 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:02:58.504175 2232297 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:02:58.504180 2232297 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:02:58.504185 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHHostname
	I0414 14:02:58.506738 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:58.507177 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:e7:58", ip: ""} in network mk-embed-certs-242761: {Iface:virbr3 ExpiryTime:2025-04-14 15:02:49 +0000 UTC Type:0 Mac:52:54:00:7d:e7:58 Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:embed-certs-242761 Clientid:01:52:54:00:7d:e7:58}
	I0414 14:02:58.507215 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined IP address 192.168.72.14 and MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:58.507347 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHPort
	I0414 14:02:58.507526 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHKeyPath
	I0414 14:02:58.507710 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHKeyPath
	I0414 14:02:58.507823 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHUsername
	I0414 14:02:58.507991 2232297 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:58.508212 2232297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0414 14:02:58.508222 2232297 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:02:58.620499 2232297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:02:58.620531 2232297 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:02:58.620549 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHHostname
	I0414 14:02:58.623828 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:58.624381 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:e7:58", ip: ""} in network mk-embed-certs-242761: {Iface:virbr3 ExpiryTime:2025-04-14 15:02:49 +0000 UTC Type:0 Mac:52:54:00:7d:e7:58 Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:embed-certs-242761 Clientid:01:52:54:00:7d:e7:58}
	I0414 14:02:58.624416 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined IP address 192.168.72.14 and MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:58.624704 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHPort
	I0414 14:02:58.624986 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHKeyPath
	I0414 14:02:58.625213 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHKeyPath
	I0414 14:02:58.625382 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHUsername
	I0414 14:02:58.625606 2232297 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:58.625927 2232297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0414 14:02:58.625943 2232297 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:02:58.749321 2232297 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:02:58.749420 2232297 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:02:58.749435 2232297 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:02:58.749449 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetMachineName
	I0414 14:02:58.749813 2232297 buildroot.go:166] provisioning hostname "embed-certs-242761"
	I0414 14:02:58.749864 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetMachineName
	I0414 14:02:58.750051 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHHostname
	I0414 14:02:58.753451 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:58.754008 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:e7:58", ip: ""} in network mk-embed-certs-242761: {Iface:virbr3 ExpiryTime:2025-04-14 15:02:49 +0000 UTC Type:0 Mac:52:54:00:7d:e7:58 Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:embed-certs-242761 Clientid:01:52:54:00:7d:e7:58}
	I0414 14:02:58.754040 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined IP address 192.168.72.14 and MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:58.754295 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHPort
	I0414 14:02:58.754511 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHKeyPath
	I0414 14:02:58.754719 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHKeyPath
	I0414 14:02:58.754917 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHUsername
	I0414 14:02:58.755122 2232297 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:58.755405 2232297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0414 14:02:58.755425 2232297 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-242761 && echo "embed-certs-242761" | sudo tee /etc/hostname
	I0414 14:02:58.894123 2232297 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-242761
	
	I0414 14:02:58.894173 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHHostname
	I0414 14:02:58.897562 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:58.897980 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:e7:58", ip: ""} in network mk-embed-certs-242761: {Iface:virbr3 ExpiryTime:2025-04-14 15:02:49 +0000 UTC Type:0 Mac:52:54:00:7d:e7:58 Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:embed-certs-242761 Clientid:01:52:54:00:7d:e7:58}
	I0414 14:02:58.898026 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined IP address 192.168.72.14 and MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:58.898221 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHPort
	I0414 14:02:58.898445 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHKeyPath
	I0414 14:02:58.898601 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHKeyPath
	I0414 14:02:58.898752 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHUsername
	I0414 14:02:58.898894 2232297 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:58.899130 2232297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0414 14:02:58.899158 2232297 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-242761' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-242761/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-242761' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:02:59.031364 2232297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:02:59.031402 2232297 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20623-2183077/.minikube CaCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20623-2183077/.minikube}
	I0414 14:02:59.031445 2232297 buildroot.go:174] setting up certificates
	I0414 14:02:59.031458 2232297 provision.go:84] configureAuth start
	I0414 14:02:59.031469 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetMachineName
	I0414 14:02:59.031799 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetIP
	I0414 14:02:59.035308 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:59.035730 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:e7:58", ip: ""} in network mk-embed-certs-242761: {Iface:virbr3 ExpiryTime:2025-04-14 15:02:49 +0000 UTC Type:0 Mac:52:54:00:7d:e7:58 Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:embed-certs-242761 Clientid:01:52:54:00:7d:e7:58}
	I0414 14:02:59.035760 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined IP address 192.168.72.14 and MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:59.035986 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHHostname
	I0414 14:02:59.038691 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:59.039195 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:e7:58", ip: ""} in network mk-embed-certs-242761: {Iface:virbr3 ExpiryTime:2025-04-14 15:02:49 +0000 UTC Type:0 Mac:52:54:00:7d:e7:58 Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:embed-certs-242761 Clientid:01:52:54:00:7d:e7:58}
	I0414 14:02:59.039228 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined IP address 192.168.72.14 and MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:59.039380 2232297 provision.go:143] copyHostCerts
	I0414 14:02:59.039453 2232297 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem, removing ...
	I0414 14:02:59.039474 2232297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem
	I0414 14:02:59.039548 2232297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem (1078 bytes)
	I0414 14:02:59.039659 2232297 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem, removing ...
	I0414 14:02:59.039671 2232297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem
	I0414 14:02:59.039700 2232297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem (1123 bytes)
	I0414 14:02:59.039779 2232297 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem, removing ...
	I0414 14:02:59.039790 2232297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem
	I0414 14:02:59.039817 2232297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem (1675 bytes)
	I0414 14:02:59.039888 2232297 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem org=jenkins.embed-certs-242761 san=[127.0.0.1 192.168.72.14 embed-certs-242761 localhost minikube]
	I0414 14:02:59.348244 2232297 provision.go:177] copyRemoteCerts
	I0414 14:02:59.348315 2232297 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:02:59.348347 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHHostname
	I0414 14:02:59.351354 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:59.351801 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:e7:58", ip: ""} in network mk-embed-certs-242761: {Iface:virbr3 ExpiryTime:2025-04-14 15:02:49 +0000 UTC Type:0 Mac:52:54:00:7d:e7:58 Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:embed-certs-242761 Clientid:01:52:54:00:7d:e7:58}
	I0414 14:02:59.351834 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined IP address 192.168.72.14 and MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:59.352011 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHPort
	I0414 14:02:59.352194 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHKeyPath
	I0414 14:02:59.352376 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHUsername
	I0414 14:02:59.352554 2232297 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/embed-certs-242761/id_rsa Username:docker}
	I0414 14:02:59.444489 2232297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 14:02:59.471764 2232297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0414 14:02:59.497224 2232297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 14:02:59.524310 2232297 provision.go:87] duration metric: took 492.835696ms to configureAuth
	I0414 14:02:59.524345 2232297 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:02:59.524577 2232297 config.go:182] Loaded profile config "embed-certs-242761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:02:59.524685 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHHostname
	I0414 14:02:59.527923 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:59.528352 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:e7:58", ip: ""} in network mk-embed-certs-242761: {Iface:virbr3 ExpiryTime:2025-04-14 15:02:49 +0000 UTC Type:0 Mac:52:54:00:7d:e7:58 Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:embed-certs-242761 Clientid:01:52:54:00:7d:e7:58}
	I0414 14:02:59.528381 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined IP address 192.168.72.14 and MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:59.528622 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHPort
	I0414 14:02:59.528891 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHKeyPath
	I0414 14:02:59.529040 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHKeyPath
	I0414 14:02:59.529230 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHUsername
	I0414 14:02:59.529392 2232297 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:59.529584 2232297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0414 14:02:59.529612 2232297 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 14:03:00.074111 2232414 start.go:364] duration metric: took 57.655567323s to acquireMachinesLock for "kubernetes-upgrade-461086"
	I0414 14:03:00.074181 2232414 start.go:96] Skipping create...Using existing machine configuration
	I0414 14:03:00.074190 2232414 fix.go:54] fixHost starting: 
	I0414 14:03:00.074661 2232414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:03:00.074735 2232414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:03:00.095965 2232414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45737
	I0414 14:03:00.096423 2232414 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:03:00.096980 2232414 main.go:141] libmachine: Using API Version  1
	I0414 14:03:00.097008 2232414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:03:00.097464 2232414 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:03:00.097717 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:03:00.097945 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetState
	I0414 14:03:00.099934 2232414 fix.go:112] recreateIfNeeded on kubernetes-upgrade-461086: state=Running err=<nil>
	W0414 14:03:00.099960 2232414 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 14:03:00.101689 2232414 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-461086" VM ...
	I0414 14:02:57.946411 2231816 out.go:235]   - Booting up control plane ...
	I0414 14:02:57.946551 2231816 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:02:57.946679 2231816 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:02:57.946815 2231816 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:02:57.947070 2231816 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:02:57.947202 2231816 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:02:57.947256 2231816 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:02:57.947398 2231816 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:02:57.947545 2231816 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:02:58.423029 2231816 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.584608ms
	I0414 14:02:58.423202 2231816 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:02:59.796318 2232297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 14:02:59.796354 2232297 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:02:59.796366 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetURL
	I0414 14:02:59.797843 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | using libvirt version 6000000
	I0414 14:02:59.800349 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:59.800720 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:e7:58", ip: ""} in network mk-embed-certs-242761: {Iface:virbr3 ExpiryTime:2025-04-14 15:02:49 +0000 UTC Type:0 Mac:52:54:00:7d:e7:58 Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:embed-certs-242761 Clientid:01:52:54:00:7d:e7:58}
	I0414 14:02:59.800791 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined IP address 192.168.72.14 and MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:59.800982 2232297 main.go:141] libmachine: Docker is up and running!
	I0414 14:02:59.801002 2232297 main.go:141] libmachine: Reticulating splines...
	I0414 14:02:59.801008 2232297 client.go:171] duration metric: took 26.417652762s to LocalClient.Create
	I0414 14:02:59.801031 2232297 start.go:167] duration metric: took 26.417719848s to libmachine.API.Create "embed-certs-242761"
	I0414 14:02:59.801043 2232297 start.go:293] postStartSetup for "embed-certs-242761" (driver="kvm2")
	I0414 14:02:59.801055 2232297 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:02:59.801087 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .DriverName
	I0414 14:02:59.801367 2232297 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:02:59.801400 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHHostname
	I0414 14:02:59.804007 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:59.804363 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:e7:58", ip: ""} in network mk-embed-certs-242761: {Iface:virbr3 ExpiryTime:2025-04-14 15:02:49 +0000 UTC Type:0 Mac:52:54:00:7d:e7:58 Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:embed-certs-242761 Clientid:01:52:54:00:7d:e7:58}
	I0414 14:02:59.804390 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined IP address 192.168.72.14 and MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:59.804549 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHPort
	I0414 14:02:59.804751 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHKeyPath
	I0414 14:02:59.804927 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHUsername
	I0414 14:02:59.805078 2232297 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/embed-certs-242761/id_rsa Username:docker}
	I0414 14:02:59.900180 2232297 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:02:59.905083 2232297 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:02:59.905129 2232297 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/addons for local assets ...
	I0414 14:02:59.905202 2232297 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/files for local assets ...
	I0414 14:02:59.905300 2232297 filesync.go:149] local asset: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem -> 21904002.pem in /etc/ssl/certs
	I0414 14:02:59.905416 2232297 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:02:59.916623 2232297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:02:59.944424 2232297 start.go:296] duration metric: took 143.362204ms for postStartSetup
	I0414 14:02:59.944488 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetConfigRaw
	I0414 14:02:59.945172 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetIP
	I0414 14:02:59.948127 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:59.948461 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:e7:58", ip: ""} in network mk-embed-certs-242761: {Iface:virbr3 ExpiryTime:2025-04-14 15:02:49 +0000 UTC Type:0 Mac:52:54:00:7d:e7:58 Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:embed-certs-242761 Clientid:01:52:54:00:7d:e7:58}
	I0414 14:02:59.948504 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined IP address 192.168.72.14 and MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:59.948773 2232297 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/config.json ...
	I0414 14:02:59.949020 2232297 start.go:128] duration metric: took 26.590928207s to createHost
	I0414 14:02:59.949052 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHHostname
	I0414 14:02:59.951333 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:59.951809 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:e7:58", ip: ""} in network mk-embed-certs-242761: {Iface:virbr3 ExpiryTime:2025-04-14 15:02:49 +0000 UTC Type:0 Mac:52:54:00:7d:e7:58 Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:embed-certs-242761 Clientid:01:52:54:00:7d:e7:58}
	I0414 14:02:59.951841 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined IP address 192.168.72.14 and MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:02:59.952010 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHPort
	I0414 14:02:59.952207 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHKeyPath
	I0414 14:02:59.952409 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHKeyPath
	I0414 14:02:59.952562 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHUsername
	I0414 14:02:59.952747 2232297 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:59.952986 2232297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0414 14:02:59.952999 2232297 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:03:00.073902 2232297 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744639380.048086343
	
	I0414 14:03:00.073931 2232297 fix.go:216] guest clock: 1744639380.048086343
	I0414 14:03:00.073942 2232297 fix.go:229] Guest: 2025-04-14 14:03:00.048086343 +0000 UTC Remote: 2025-04-14 14:02:59.949036604 +0000 UTC m=+65.280709801 (delta=99.049739ms)
	I0414 14:03:00.073969 2232297 fix.go:200] guest clock delta is within tolerance: 99.049739ms
	I0414 14:03:00.073975 2232297 start.go:83] releasing machines lock for "embed-certs-242761", held for 26.716089003s
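	The fix.go lines above run date +%s.%N in the guest and compare it against the host clock, accepting the ~99ms delta as within tolerance. A quick sketch of the same comparison done by hand, assuming the profile name from this run:
	# host clock
	date +%s.%N
	# guest clock, over the same SSH path the provisioner uses
	out/minikube-linux-amd64 -p embed-certs-242761 ssh "date +%s.%N"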
	I0414 14:03:00.074010 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .DriverName
	I0414 14:03:00.074381 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetIP
	I0414 14:03:00.077761 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:03:00.078182 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:e7:58", ip: ""} in network mk-embed-certs-242761: {Iface:virbr3 ExpiryTime:2025-04-14 15:02:49 +0000 UTC Type:0 Mac:52:54:00:7d:e7:58 Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:embed-certs-242761 Clientid:01:52:54:00:7d:e7:58}
	I0414 14:03:00.078211 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined IP address 192.168.72.14 and MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:03:00.078570 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .DriverName
	I0414 14:03:00.079150 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .DriverName
	I0414 14:03:00.079389 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .DriverName
	I0414 14:03:00.079522 2232297 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:03:00.079581 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHHostname
	I0414 14:03:00.079658 2232297 ssh_runner.go:195] Run: cat /version.json
	I0414 14:03:00.079685 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHHostname
	I0414 14:03:00.082611 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:03:00.082908 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:03:00.083119 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:e7:58", ip: ""} in network mk-embed-certs-242761: {Iface:virbr3 ExpiryTime:2025-04-14 15:02:49 +0000 UTC Type:0 Mac:52:54:00:7d:e7:58 Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:embed-certs-242761 Clientid:01:52:54:00:7d:e7:58}
	I0414 14:03:00.083149 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined IP address 192.168.72.14 and MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:03:00.083392 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:e7:58", ip: ""} in network mk-embed-certs-242761: {Iface:virbr3 ExpiryTime:2025-04-14 15:02:49 +0000 UTC Type:0 Mac:52:54:00:7d:e7:58 Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:embed-certs-242761 Clientid:01:52:54:00:7d:e7:58}
	I0414 14:03:00.083407 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHPort
	I0414 14:03:00.083419 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined IP address 192.168.72.14 and MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:03:00.083567 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHKeyPath
	I0414 14:03:00.083735 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHPort
	I0414 14:03:00.083758 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHUsername
	I0414 14:03:00.083918 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHKeyPath
	I0414 14:03:00.083919 2232297 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/embed-certs-242761/id_rsa Username:docker}
	I0414 14:03:00.084060 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetSSHUsername
	I0414 14:03:00.084220 2232297 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/embed-certs-242761/id_rsa Username:docker}
	I0414 14:03:00.188681 2232297 ssh_runner.go:195] Run: systemctl --version
	I0414 14:03:00.195539 2232297 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 14:03:00.370601 2232297 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:03:00.378573 2232297 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:03:00.378660 2232297 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:03:00.401774 2232297 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:03:00.401805 2232297 start.go:495] detecting cgroup driver to use...
	I0414 14:03:00.401891 2232297 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 14:03:00.425170 2232297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 14:03:00.444006 2232297 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:03:00.444130 2232297 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:03:00.465922 2232297 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:03:00.485279 2232297 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:03:00.645516 2232297 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:03:00.815986 2232297 docker.go:233] disabling docker service ...
	I0414 14:03:00.816065 2232297 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:03:00.830539 2232297 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:03:00.849061 2232297 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:03:00.979403 2232297 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:03:01.127650 2232297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:03:01.144262 2232297 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:03:01.163550 2232297 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 14:03:01.163610 2232297 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:03:01.173918 2232297 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 14:03:01.173986 2232297 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:03:01.184561 2232297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:03:01.195405 2232297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:03:01.210008 2232297 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:03:01.227527 2232297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:03:01.241808 2232297 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:03:01.262307 2232297 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:03:01.273487 2232297 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:03:01.285800 2232297 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:03:01.285892 2232297 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:03:01.302722 2232297 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:03:01.313878 2232297 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:03:01.435603 2232297 ssh_runner.go:195] Run: sudo systemctl restart crio
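	The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs as cgroup manager, conmon_cgroup, and the unprivileged-port sysctl) before the daemon-reload and crio restart. A hedged sketch for confirming those keys landed, run inside the VM; the grep pattern simply mirrors the keys edited above:
	# expected values after the edits:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0" inside default_sysctls
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# crictl reports the runtime's own view once crio is back up
	sudo crictl info | head -n 20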
	I0414 14:03:01.539901 2232297 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 14:03:01.540022 2232297 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 14:03:01.545144 2232297 start.go:563] Will wait 60s for crictl version
	I0414 14:03:01.545210 2232297 ssh_runner.go:195] Run: which crictl
	I0414 14:03:01.549614 2232297 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:03:01.594682 2232297 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 14:03:01.594798 2232297 ssh_runner.go:195] Run: crio --version
	I0414 14:03:01.625157 2232297 ssh_runner.go:195] Run: crio --version
	I0414 14:03:01.655881 2232297 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 14:03:00.950657 2231425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 14:03:00.950813 2231425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:03:00.951057 2231425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
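	This kubeadm run (logged by pid 2231425) is still failing the same kubelet health probe that the other control plane passed at 14:02:58. When the healthz endpoint keeps refusing connections like this, the usual next step is to inspect the kubelet unit on the affected node; a minimal sketch, run inside that VM (e.g. via minikube ssh), using only standard systemd tools and the 10248 port shown in the log:
	# is the kubelet unit active at all?
	sudo systemctl status kubelet --no-pager
	# the last kubelet log lines usually name the misconfiguration
	sudo journalctl -u kubelet --no-pager -n 50
	# the same probe kubeadm's kubelet-check performs
	curl -sSL http://localhost:10248/healthz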
	I0414 14:03:00.102989 2232414 machine.go:93] provisionDockerMachine start ...
	I0414 14:03:00.103021 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:03:00.103286 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:03:00.106466 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:00.107240 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:03:00.107311 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:00.107386 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:03:00.107684 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:03:00.107865 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:03:00.107967 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:03:00.108169 2232414 main.go:141] libmachine: Using SSH client type: native
	I0414 14:03:00.108474 2232414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 14:03:00.108502 2232414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:03:00.227520 2232414 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-461086
	
	I0414 14:03:00.227559 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetMachineName
	I0414 14:03:00.227874 2232414 buildroot.go:166] provisioning hostname "kubernetes-upgrade-461086"
	I0414 14:03:00.227915 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetMachineName
	I0414 14:03:00.228166 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:03:00.232178 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:00.232783 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:03:00.232822 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:00.233024 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:03:00.233302 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:03:00.233516 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:03:00.233708 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:03:00.233937 2232414 main.go:141] libmachine: Using SSH client type: native
	I0414 14:03:00.234292 2232414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 14:03:00.234310 2232414 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-461086 && echo "kubernetes-upgrade-461086" | sudo tee /etc/hostname
	I0414 14:03:00.378937 2232414 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-461086
	
	I0414 14:03:00.378993 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:03:00.382219 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:00.382676 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:03:00.382700 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:00.382912 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:03:00.383124 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:03:00.383327 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:03:00.383489 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:03:00.383674 2232414 main.go:141] libmachine: Using SSH client type: native
	I0414 14:03:00.383965 2232414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 14:03:00.383990 2232414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-461086' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-461086/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-461086' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:03:00.507411 2232414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:03:00.507509 2232414 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20623-2183077/.minikube CaCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20623-2183077/.minikube}
	I0414 14:03:00.507559 2232414 buildroot.go:174] setting up certificates
	I0414 14:03:00.507574 2232414 provision.go:84] configureAuth start
	I0414 14:03:00.507599 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetMachineName
	I0414 14:03:00.507902 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetIP
	I0414 14:03:00.511151 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:00.511500 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:03:00.511544 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:00.511710 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:03:00.514462 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:00.514870 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:03:00.514909 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:00.515055 2232414 provision.go:143] copyHostCerts
	I0414 14:03:00.515117 2232414 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem, removing ...
	I0414 14:03:00.515127 2232414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem
	I0414 14:03:00.515192 2232414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem (1078 bytes)
	I0414 14:03:00.515288 2232414 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem, removing ...
	I0414 14:03:00.515293 2232414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem
	I0414 14:03:00.515311 2232414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem (1123 bytes)
	I0414 14:03:00.515375 2232414 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem, removing ...
	I0414 14:03:00.515378 2232414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem
	I0414 14:03:00.515394 2232414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem (1675 bytes)
	I0414 14:03:00.515450 2232414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-461086 san=[127.0.0.1 192.168.50.41 kubernetes-upgrade-461086 localhost minikube]
	I0414 14:03:00.863015 2232414 provision.go:177] copyRemoteCerts
	I0414 14:03:00.863106 2232414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:03:00.863143 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:03:00.866316 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:00.866676 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:03:00.866717 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:00.866929 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:03:00.867232 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:03:00.867420 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:03:00.867598 2232414 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa Username:docker}
	I0414 14:03:00.960261 2232414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 14:03:01.002930 2232414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0414 14:03:01.041059 2232414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:03:01.075534 2232414 provision.go:87] duration metric: took 567.943998ms to configureAuth
	I0414 14:03:01.075574 2232414 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:03:01.075807 2232414 config.go:182] Loaded profile config "kubernetes-upgrade-461086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:03:01.075918 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:03:01.078895 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:01.079388 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:03:01.079422 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:01.079586 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:03:01.079893 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:03:01.080075 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:03:01.080282 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:03:01.080498 2232414 main.go:141] libmachine: Using SSH client type: native
	I0414 14:03:01.080838 2232414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 14:03:01.080869 2232414 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 14:03:03.916604 2231816 kubeadm.go:310] [api-check] The API server is healthy after 5.501433495s
	I0414 14:03:03.930098 2231816 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:03:03.952099 2231816 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:03:03.996060 2231816 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:03:03.996348 2231816 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-496809 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:03:04.016151 2231816 kubeadm.go:310] [bootstrap-token] Using token: bfqwax.pq3j5dolxvyubkxl
	I0414 14:03:01.656931 2232297 main.go:141] libmachine: (embed-certs-242761) Calling .GetIP
	I0414 14:03:01.660077 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:03:01.660569 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:e7:58", ip: ""} in network mk-embed-certs-242761: {Iface:virbr3 ExpiryTime:2025-04-14 15:02:49 +0000 UTC Type:0 Mac:52:54:00:7d:e7:58 Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:embed-certs-242761 Clientid:01:52:54:00:7d:e7:58}
	I0414 14:03:01.660600 2232297 main.go:141] libmachine: (embed-certs-242761) DBG | domain embed-certs-242761 has defined IP address 192.168.72.14 and MAC address 52:54:00:7d:e7:58 in network mk-embed-certs-242761
	I0414 14:03:01.660858 2232297 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0414 14:03:01.665132 2232297 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:03:01.678690 2232297 kubeadm.go:883] updating cluster {Name:embed-certs-242761 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-242761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:03:01.678803 2232297 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:03:01.678852 2232297 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:03:01.713647 2232297 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:03:01.713736 2232297 ssh_runner.go:195] Run: which lz4
	I0414 14:03:01.718221 2232297 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:03:01.723994 2232297 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:03:01.724033 2232297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 14:03:03.230714 2232297 crio.go:462] duration metric: took 1.512531954s to copy over tarball
	I0414 14:03:03.230808 2232297 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:03:04.017665 2231816 out.go:235]   - Configuring RBAC rules ...
	I0414 14:03:04.017863 2231816 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:03:04.031550 2231816 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:03:04.045455 2231816 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:03:04.049575 2231816 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:03:04.053451 2231816 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:03:04.057472 2231816 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:03:04.321494 2231816 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:03:04.751663 2231816 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:03:05.324040 2231816 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:03:05.325188 2231816 kubeadm.go:310] 
	I0414 14:03:05.325307 2231816 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:03:05.325330 2231816 kubeadm.go:310] 
	I0414 14:03:05.325429 2231816 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:03:05.325439 2231816 kubeadm.go:310] 
	I0414 14:03:05.325491 2231816 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:03:05.325589 2231816 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:03:05.325676 2231816 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:03:05.325687 2231816 kubeadm.go:310] 
	I0414 14:03:05.325762 2231816 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:03:05.325773 2231816 kubeadm.go:310] 
	I0414 14:03:05.325837 2231816 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:03:05.325851 2231816 kubeadm.go:310] 
	I0414 14:03:05.325965 2231816 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:03:05.326091 2231816 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:03:05.326193 2231816 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:03:05.326204 2231816 kubeadm.go:310] 
	I0414 14:03:05.326350 2231816 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:03:05.326476 2231816 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:03:05.326489 2231816 kubeadm.go:310] 
	I0414 14:03:05.326598 2231816 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bfqwax.pq3j5dolxvyubkxl \
	I0414 14:03:05.326730 2231816 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a5a7cfa3817d077a98fd35a9c88a0bda6880ef9130519c66d815ea92b980d7c \
	I0414 14:03:05.326767 2231816 kubeadm.go:310] 	--control-plane 
	I0414 14:03:05.326776 2231816 kubeadm.go:310] 
	I0414 14:03:05.326874 2231816 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:03:05.326883 2231816 kubeadm.go:310] 
	I0414 14:03:05.326984 2231816 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bfqwax.pq3j5dolxvyubkxl \
	I0414 14:03:05.327156 2231816 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a5a7cfa3817d077a98fd35a9c88a0bda6880ef9130519c66d815ea92b980d7c 
	I0414 14:03:05.328235 2231816 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:03:05.328273 2231816 cni.go:84] Creating CNI manager for ""
	I0414 14:03:05.328284 2231816 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:03:05.330449 2231816 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
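	The join commands printed above embed a time-limited bootstrap token (kubeadm issues these with a 24h TTL by default). If a node needs to join after that window, the token can be reissued on the control plane; a sketch, assuming kubeadm is run on the node itself (minikube keeps it under /var/lib/minikube/binaries/<version>/, an assumption based on the kubelet path that appears later in this log):
	# list current bootstrap tokens and their expiry
	sudo kubeadm token list
	# mint a fresh token and print a ready-to-run join command
	sudo kubeadm token create --print-join-command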
	I0414 14:03:05.951793 2231425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:03:05.952023 2231425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:03:07.220770 2232414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 14:03:07.220808 2232414 machine.go:96] duration metric: took 7.117795943s to provisionDockerMachine
	I0414 14:03:07.220823 2232414 start.go:293] postStartSetup for "kubernetes-upgrade-461086" (driver="kvm2")
	I0414 14:03:07.220837 2232414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:03:07.220866 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:03:07.221259 2232414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:03:07.221312 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:03:07.224758 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:07.225224 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:03:07.225258 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:07.225575 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:03:07.225753 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:03:07.225905 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:03:07.226032 2232414 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa Username:docker}
	I0414 14:03:05.543476 2232297 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.31262974s)
	I0414 14:03:05.543512 2232297 crio.go:469] duration metric: took 2.312760496s to extract the tarball
	I0414 14:03:05.543524 2232297 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 14:03:05.585535 2232297 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:03:05.638410 2232297 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 14:03:05.638444 2232297 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:03:05.638455 2232297 kubeadm.go:934] updating node { 192.168.72.14 8443 v1.32.2 crio true true} ...
	I0414 14:03:05.638607 2232297 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-242761 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-242761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:03:05.638679 2232297 ssh_runner.go:195] Run: crio config
	I0414 14:03:05.692291 2232297 cni.go:84] Creating CNI manager for ""
	I0414 14:03:05.692318 2232297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:03:05.692332 2232297 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:03:05.692354 2232297 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.14 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-242761 NodeName:embed-certs-242761 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:03:05.692495 2232297 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-242761"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.14"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.14"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 14:03:05.692603 2232297 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:03:05.703583 2232297 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:03:05.703677 2232297 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 14:03:05.716128 2232297 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0414 14:03:05.734732 2232297 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:03:05.752287 2232297 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0414 14:03:05.768939 2232297 ssh_runner.go:195] Run: grep 192.168.72.14	control-plane.minikube.internal$ /etc/hosts
	I0414 14:03:05.773016 2232297 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
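The one-liner above swaps the control-plane.minikube.internal mapping in /etc/hosts: filter out any stale line for that hostname, append a fresh "IP<TAB>hostname" entry, and copy the result back with sudo. A minimal Go sketch of the same technique (a hypothetical helper, not minikube's actual implementation; it writes via an atomic rename rather than sudo cp):

	// addHostsEntry drops any existing line mapping hostname and appends a fresh
	// "IP<TAB>hostname" entry, then replaces the file atomically.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func addHostsEntry(path, ip, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+hostname) { // keep unrelated entries
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path) // rename is atomic on the same filesystem
	}

	func main() {
		if err := addHostsEntry("/etc/hosts", "192.168.72.14", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}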
	I0414 14:03:05.785519 2232297 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:03:05.904489 2232297 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:03:05.923328 2232297 certs.go:68] Setting up /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761 for IP: 192.168.72.14
	I0414 14:03:05.923361 2232297 certs.go:194] generating shared ca certs ...
	I0414 14:03:05.923387 2232297 certs.go:226] acquiring lock for ca certs: {Name:mkd994da28098ae08a84efba20f096b52fe71222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:03:05.923622 2232297 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key
	I0414 14:03:05.923690 2232297 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key
	I0414 14:03:05.923717 2232297 certs.go:256] generating profile certs ...
	I0414 14:03:05.923814 2232297 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/client.key
	I0414 14:03:05.923849 2232297 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/client.crt with IP's: []
	I0414 14:03:06.713071 2232297 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/client.crt ...
	I0414 14:03:06.713114 2232297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/client.crt: {Name:mk312782e91f79723db0699a981eeb3397d18379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:03:06.713286 2232297 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/client.key ...
	I0414 14:03:06.713300 2232297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/client.key: {Name:mkc7373082f55e64ba8c04b9a481131a378d9950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:03:06.713381 2232297 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/apiserver.key.ec8db9b0
	I0414 14:03:06.713398 2232297 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/apiserver.crt.ec8db9b0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.14]
	I0414 14:03:07.112853 2232297 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/apiserver.crt.ec8db9b0 ...
	I0414 14:03:07.112897 2232297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/apiserver.crt.ec8db9b0: {Name:mk8715362ae7ea78daa5dadcfa98bc90cec81f43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:03:07.113138 2232297 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/apiserver.key.ec8db9b0 ...
	I0414 14:03:07.113158 2232297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/apiserver.key.ec8db9b0: {Name:mk5392caca6f13f7ab1c7d4e7079460e0fc0e33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:03:07.113271 2232297 certs.go:381] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/apiserver.crt.ec8db9b0 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/apiserver.crt
	I0414 14:03:07.113376 2232297 certs.go:385] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/apiserver.key.ec8db9b0 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/apiserver.key
	I0414 14:03:07.113474 2232297 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/proxy-client.key
	I0414 14:03:07.113494 2232297 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/proxy-client.crt with IP's: []
	I0414 14:03:07.241469 2232297 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/proxy-client.crt ...
	I0414 14:03:07.241497 2232297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/proxy-client.crt: {Name:mk60be6bb0ce59b59caddec2ecfb7e9c7ca7d589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:03:07.241651 2232297 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/proxy-client.key ...
	I0414 14:03:07.241664 2232297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/proxy-client.key: {Name:mkc241c5e073959cdeeefce55033244236c239f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:03:07.241862 2232297 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem (1338 bytes)
	W0414 14:03:07.241917 2232297 certs.go:480] ignoring /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400_empty.pem, impossibly tiny 0 bytes
	I0414 14:03:07.241931 2232297 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 14:03:07.241957 2232297 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem (1078 bytes)
	I0414 14:03:07.241980 2232297 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:03:07.242012 2232297 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem (1675 bytes)
	I0414 14:03:07.242070 2232297 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem (1708 bytes)
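The profile certificates generated above are ordinary x509 leaf certificates signed by the shared minikubeCA, with the cluster service IP, loopback and node IP embedded as SANs. A rough Go sketch of that step, assuming RSA keys and a pre-existing CA (illustrative only, not minikube's crypto.go):

	// newSignedCert creates a key pair and an x509 certificate carrying the
	// given IP SANs, signed by the supplied CA certificate and key.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func newSignedCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, cn string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: cn},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips, // e.g. 10.96.0.1, 127.0.0.1, 192.168.72.14
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
		ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.14")}
		der, _, err := newSignedCert(caCert, caKey, "minikube", ips)
		fmt.Println(len(der), err)
	}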
	I0414 14:03:07.242671 2232297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:03:07.275055 2232297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:03:07.302755 2232297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:03:07.333219 2232297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:03:07.362808 2232297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0414 14:03:07.395713 2232297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 14:03:07.501826 2232297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:03:07.544611 2232297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/embed-certs-242761/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:03:07.586615 2232297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:03:07.616914 2232297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem --> /usr/share/ca-certificates/2190400.pem (1338 bytes)
	I0414 14:03:07.645276 2232297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /usr/share/ca-certificates/21904002.pem (1708 bytes)
	I0414 14:03:07.670824 2232297 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:03:07.691108 2232297 ssh_runner.go:195] Run: openssl version
	I0414 14:03:07.697460 2232297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:03:07.710885 2232297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:03:07.715674 2232297 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:03:07.715756 2232297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:03:07.721998 2232297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:03:07.733360 2232297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2190400.pem && ln -fs /usr/share/ca-certificates/2190400.pem /etc/ssl/certs/2190400.pem"
	I0414 14:03:07.745225 2232297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2190400.pem
	I0414 14:03:07.750004 2232297 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 13:02 /usr/share/ca-certificates/2190400.pem
	I0414 14:03:07.750068 2232297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2190400.pem
	I0414 14:03:07.756041 2232297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2190400.pem /etc/ssl/certs/51391683.0"
	I0414 14:03:07.769113 2232297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21904002.pem && ln -fs /usr/share/ca-certificates/21904002.pem /etc/ssl/certs/21904002.pem"
	I0414 14:03:07.780516 2232297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21904002.pem
	I0414 14:03:07.785784 2232297 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 13:02 /usr/share/ca-certificates/21904002.pem
	I0414 14:03:07.785849 2232297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21904002.pem
	I0414 14:03:07.791686 2232297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21904002.pem /etc/ssl/certs/3ec20f2e.0"
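The openssl/ln pairs above install each CA into the guest's OpenSSL trust store: the PEM is copied under /usr/share/ca-certificates and a symlink named after its subject hash (<hash>.0) is created in /etc/ssl/certs, which is how OpenSSL-based clients look trusted CAs up. A small illustrative Go sketch of that step (hypothetical helper; assumes openssl is on PATH and the process can write /etc/ssl/certs):

	// installCACert computes the OpenSSL subject hash of a PEM and exposes it
	// under /etc/ssl/certs as "<hash>.0".
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCACert(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pem, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // replace any stale symlink
		return os.Symlink(pem, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}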
	I0414 14:03:07.804020 2232297 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:03:07.809052 2232297 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:03:07.809115 2232297 kubeadm.go:392] StartCluster: {Name:embed-certs-242761 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-242761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:03:07.809224 2232297 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 14:03:07.809290 2232297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:03:07.862125 2232297 cri.go:89] found id: ""
	I0414 14:03:07.862223 2232297 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:03:07.876852 2232297 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:03:07.887161 2232297 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:03:07.899352 2232297 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:03:07.899380 2232297 kubeadm.go:157] found existing configuration files:
	
	I0414 14:03:07.899428 2232297 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:03:07.909535 2232297 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:03:07.909602 2232297 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:03:07.919871 2232297 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:03:07.930133 2232297 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:03:07.930213 2232297 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:03:07.941011 2232297 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:03:07.954591 2232297 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:03:07.954669 2232297 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:03:07.968385 2232297 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:03:07.981243 2232297 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:03:07.981335 2232297 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 14:03:07.994752 2232297 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:03:08.159159 2232297 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:03:08.159313 2232297 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:03:08.298614 2232297 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:03:08.298852 2232297 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:03:08.298997 2232297 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:03:08.312313 2232297 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:03:08.390958 2232297 out.go:235]   - Generating certificates and keys ...
	I0414 14:03:08.391085 2232297 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:03:08.391226 2232297 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:03:08.502674 2232297 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:03:08.721679 2232297 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:03:08.838432 2232297 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:03:09.003748 2232297 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:03:09.143441 2232297 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:03:09.143667 2232297 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-242761 localhost] and IPs [192.168.72.14 127.0.0.1 ::1]
	I0414 14:03:09.490205 2232297 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:03:09.490396 2232297 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-242761 localhost] and IPs [192.168.72.14 127.0.0.1 ::1]
	I0414 14:03:09.608137 2232297 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:03:05.331652 2231816 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 14:03:05.343693 2231816 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 14:03:05.367373 2231816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:03:05.367486 2231816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:03:05.367492 2231816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-496809 minikube.k8s.io/updated_at=2025_04_14T14_03_05_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=460835bb8f21087bfa90e48a25f4afc66a903d88 minikube.k8s.io/name=no-preload-496809 minikube.k8s.io/primary=true
	I0414 14:03:05.403808 2231816 ops.go:34] apiserver oom_adj: -16
	I0414 14:03:05.548994 2231816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:03:06.049984 2231816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:03:06.550001 2231816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:03:07.049821 2231816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:03:07.549959 2231816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:03:08.049998 2231816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:03:08.549856 2231816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:03:09.050013 2231816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:03:09.549698 2231816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:03:10.306947 2231816 kubeadm.go:1113] duration metric: took 4.939538884s to wait for elevateKubeSystemPrivileges
	I0414 14:03:10.307000 2231816 kubeadm.go:394] duration metric: took 15.545449044s to StartCluster
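The repeated "kubectl get sa default" runs above are a simple readiness poll: retry roughly every half second until the default ServiceAccount exists, which is the signal that RBAC bindings can be applied. A minimal sketch of that loop (hypothetical helper; the binary path and kubeconfig are taken from the log, the timeout is an assumption):

	// waitForDefaultSA re-runs "kubectl get sa default" every 500ms until it
	// succeeds or the deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // the default service account exists
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("default service account not ready after %s", timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		err := waitForDefaultSA("/var/lib/minikube/binaries/v1.32.2/kubectl", "/var/lib/minikube/kubeconfig", time.Minute)
		fmt.Println("wait result:", err)
	}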
	I0414 14:03:10.307028 2231816 settings.go:142] acquiring lock: {Name:mk2be36efecc8d95b489214d6449055db55f6f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:03:10.307126 2231816 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:03:10.308536 2231816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/kubeconfig: {Name:mka4d12cff403cd78c270c5ea752d21aa135c1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:03:10.308882 2231816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:03:10.308889 2231816 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.8 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:03:10.308981 2231816 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:03:10.309110 2231816 addons.go:69] Setting storage-provisioner=true in profile "no-preload-496809"
	I0414 14:03:10.309122 2231816 addons.go:69] Setting default-storageclass=true in profile "no-preload-496809"
	I0414 14:03:10.309136 2231816 addons.go:238] Setting addon storage-provisioner=true in "no-preload-496809"
	I0414 14:03:10.309170 2231816 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-496809"
	I0414 14:03:10.309176 2231816 host.go:66] Checking if "no-preload-496809" exists ...
	I0414 14:03:10.309124 2231816 config.go:182] Loaded profile config "no-preload-496809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:03:10.309622 2231816 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:03:10.309667 2231816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:03:10.309699 2231816 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:03:10.309749 2231816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:03:10.310242 2231816 out.go:177] * Verifying Kubernetes components...
	I0414 14:03:10.311482 2231816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:03:10.329973 2231816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45903
	I0414 14:03:10.330213 2231816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43905
	I0414 14:03:10.330757 2231816 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:03:10.330908 2231816 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:03:10.331327 2231816 main.go:141] libmachine: Using API Version  1
	I0414 14:03:10.331354 2231816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:03:10.331462 2231816 main.go:141] libmachine: Using API Version  1
	I0414 14:03:10.331487 2231816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:03:10.331782 2231816 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:03:10.331835 2231816 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:03:10.331999 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetState
	I0414 14:03:10.332517 2231816 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:03:10.332568 2231816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:03:10.335874 2231816 addons.go:238] Setting addon default-storageclass=true in "no-preload-496809"
	I0414 14:03:10.335922 2231816 host.go:66] Checking if "no-preload-496809" exists ...
	I0414 14:03:10.336302 2231816 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:03:10.336347 2231816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:03:10.355154 2231816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35201
	I0414 14:03:10.356332 2231816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38759
	I0414 14:03:10.356506 2231816 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:03:10.356659 2231816 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:03:10.357242 2231816 main.go:141] libmachine: Using API Version  1
	I0414 14:03:10.357264 2231816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:03:10.357405 2231816 main.go:141] libmachine: Using API Version  1
	I0414 14:03:10.357417 2231816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:03:10.357941 2231816 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:03:10.358116 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetState
	I0414 14:03:10.358158 2231816 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:03:10.358793 2231816 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:03:10.358835 2231816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:03:10.361195 2231816 main.go:141] libmachine: (no-preload-496809) Calling .DriverName
	I0414 14:03:10.362992 2231816 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:03:09.761858 2232297 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:03:09.879630 2232297 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:03:09.880109 2232297 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:03:10.151109 2232297 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:03:10.242950 2232297 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:03:10.364378 2232297 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:03:10.557929 2232297 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:03:10.716232 2232297 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:03:10.717151 2232297 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:03:10.719633 2232297 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:03:07.313425 2232414 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:03:07.318314 2232414 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:03:07.318347 2232414 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/addons for local assets ...
	I0414 14:03:07.318409 2232414 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/files for local assets ...
	I0414 14:03:07.318487 2232414 filesync.go:149] local asset: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem -> 21904002.pem in /etc/ssl/certs
	I0414 14:03:07.318592 2232414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:03:07.332588 2232414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:03:07.364083 2232414 start.go:296] duration metric: took 143.240901ms for postStartSetup
	I0414 14:03:07.364136 2232414 fix.go:56] duration metric: took 7.289946835s for fixHost
	I0414 14:03:07.364165 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:03:07.367534 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:07.367865 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:03:07.367926 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:07.368038 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:03:07.368253 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:03:07.368437 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:03:07.368626 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:03:07.368843 2232414 main.go:141] libmachine: Using SSH client type: native
	I0414 14:03:07.369122 2232414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 14:03:07.369138 2232414 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:03:07.490369 2232414 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744639387.485231705
	
	I0414 14:03:07.490405 2232414 fix.go:216] guest clock: 1744639387.485231705
	I0414 14:03:07.490416 2232414 fix.go:229] Guest: 2025-04-14 14:03:07.485231705 +0000 UTC Remote: 2025-04-14 14:03:07.36414241 +0000 UTC m=+65.102976997 (delta=121.089295ms)
	I0414 14:03:07.490500 2232414 fix.go:200] guest clock delta is within tolerance: 121.089295ms
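The guest-clock check above runs date +%s.%N inside the VM, parses the epoch value, and accepts the run if the host/guest delta stays within a small tolerance (here ~121ms). An illustrative Go sketch of that comparison (not minikube's fix.go; the 2s tolerance is an assumption):

	// clockDeltaOK parses the guest's `date +%s.%N` output and checks the
	// absolute offset from the host clock against a tolerance.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			frac := (parts[1] + "000000000")[:9] // right-pad fraction to nanoseconds
			nsec, err = strconv.ParseInt(frac, 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func clockDeltaOK(guestOut string, host time.Time, tolerance time.Duration) (time.Duration, bool, error) {
		guest, err := parseGuestClock(guestOut)
		if err != nil {
			return 0, false, err
		}
		delta := guest.Sub(host)
		return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
	}

	func main() {
		delta, ok, err := clockDeltaOK("1744639387.485231705", time.Now(), 2*time.Second)
		fmt.Println(delta, ok, err)
	}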
	I0414 14:03:07.490508 2232414 start.go:83] releasing machines lock for "kubernetes-upgrade-461086", held for 7.416355533s
	I0414 14:03:07.490543 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:03:07.490834 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetIP
	I0414 14:03:07.494699 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:07.495147 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:03:07.495178 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:07.495359 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:03:07.495930 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:03:07.496147 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:03:07.496253 2232414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:03:07.496298 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:03:07.496645 2232414 ssh_runner.go:195] Run: cat /version.json
	I0414 14:03:07.496673 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:03:07.500150 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:07.500490 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:07.500540 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:03:07.500566 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:07.500986 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:03:07.501068 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:03:07.501106 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:07.501284 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:03:07.501310 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:03:07.501516 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:03:07.501518 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:03:07.501663 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:03:07.501731 2232414 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa Username:docker}
	I0414 14:03:07.501857 2232414 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa Username:docker}
	I0414 14:03:07.614004 2232414 ssh_runner.go:195] Run: systemctl --version
	I0414 14:03:07.623052 2232414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 14:03:07.788237 2232414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:03:07.795691 2232414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:03:07.795758 2232414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:03:07.807701 2232414 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0414 14:03:07.807737 2232414 start.go:495] detecting cgroup driver to use...
	I0414 14:03:07.807813 2232414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 14:03:07.831122 2232414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 14:03:07.847540 2232414 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:03:07.847611 2232414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:03:07.866715 2232414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:03:07.884167 2232414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:03:08.073432 2232414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:03:08.312316 2232414 docker.go:233] disabling docker service ...
	I0414 14:03:08.312398 2232414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:03:08.336965 2232414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:03:08.353540 2232414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:03:08.517252 2232414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:03:08.709144 2232414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:03:08.724650 2232414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:03:08.756985 2232414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 14:03:08.757076 2232414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:03:08.783601 2232414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 14:03:08.783695 2232414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:03:08.900289 2232414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:03:09.033255 2232414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:03:09.202942 2232414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:03:09.254592 2232414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:03:09.271473 2232414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:03:09.291865 2232414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:03:09.325522 2232414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:03:09.357753 2232414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
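The sed commands above edit a CRI-O drop-in in place: rewrite whichever "key = value" line exists (pause_image, cgroup_manager, conmon_cgroup, ...) and leave the rest of /etc/crio/crio.conf.d/02-crio.conf untouched. A minimal Go sketch of that kind of idempotent line rewrite (hypothetical helper, not what ssh_runner actually executes):

	// setConfigValue rewrites a single `key = "value"` line in a config file,
	// appending it if no such line exists yet.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func setConfigValue(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
		line := fmt.Sprintf(`%s = "%s"`, key, value)
		var out []byte
		if re.Match(data) {
			out = re.ReplaceAll(data, []byte(line))
		} else {
			out = append(data, []byte("\n"+line+"\n")...)
		}
		return os.WriteFile(path, out, 0644)
	}

	func main() {
		err := setConfigValue("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10")
		fmt.Println(err)
	}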
	I0414 14:03:09.397675 2232414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:03:09.600854 2232414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 14:03:10.853025 2232414 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.252121148s)
	I0414 14:03:10.853064 2232414 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 14:03:10.853126 2232414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 14:03:10.861437 2232414 start.go:563] Will wait 60s for crictl version
	I0414 14:03:10.861512 2232414 ssh_runner.go:195] Run: which crictl
	I0414 14:03:10.865789 2232414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:03:10.918268 2232414 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 14:03:10.918359 2232414 ssh_runner.go:195] Run: crio --version
	I0414 14:03:10.975551 2232414 ssh_runner.go:195] Run: crio --version
	I0414 14:03:11.013329 2232414 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 14:03:11.014475 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetIP
	I0414 14:03:11.017477 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:11.017905 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:03:11.017942 2232414 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:03:11.018178 2232414 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0414 14:03:11.024258 2232414 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-461086 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-461086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.41 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:03:11.024398 2232414 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:03:11.024466 2232414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:03:11.080608 2232414 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 14:03:11.080637 2232414 crio.go:433] Images already preloaded, skipping extraction
	I0414 14:03:11.080697 2232414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:03:11.126405 2232414 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 14:03:11.126435 2232414 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:03:11.126448 2232414 kubeadm.go:934] updating node { 192.168.50.41 8443 v1.32.2 crio true true} ...
	I0414 14:03:11.126598 2232414 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-461086 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-461086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:03:11.126690 2232414 ssh_runner.go:195] Run: crio config
	I0414 14:03:11.183833 2232414 cni.go:84] Creating CNI manager for ""
	I0414 14:03:11.183876 2232414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:03:11.183891 2232414 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:03:11.183921 2232414 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.41 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-461086 NodeName:kubernetes-upgrade-461086 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:03:11.184139 2232414 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-461086"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.41"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.41"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 14:03:11.184231 2232414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:03:11.202823 2232414 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:03:11.202935 2232414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 14:03:11.324083 2232414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0414 14:03:11.563553 2232414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:03:11.756493 2232414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I0414 14:03:11.872942 2232414 ssh_runner.go:195] Run: grep 192.168.50.41	control-plane.minikube.internal$ /etc/hosts
	I0414 14:03:11.908702 2232414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:03:10.364220 2231816 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:03:10.364251 2231816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 14:03:10.364272 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHHostname
	I0414 14:03:10.367645 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:03:10.368122 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:no-preload-496809 Clientid:01:52:54:00:24:6f:af}
	I0414 14:03:10.368147 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:03:10.368291 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHPort
	I0414 14:03:10.368470 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHKeyPath
	I0414 14:03:10.368598 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHUsername
	I0414 14:03:10.368749 2231816 sshutil.go:53] new ssh client: &{IP:192.168.61.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/no-preload-496809/id_rsa Username:docker}
	I0414 14:03:10.379870 2231816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40033
	I0414 14:03:10.380329 2231816 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:03:10.380926 2231816 main.go:141] libmachine: Using API Version  1
	I0414 14:03:10.380954 2231816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:03:10.381506 2231816 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:03:10.381746 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetState
	I0414 14:03:10.385414 2231816 main.go:141] libmachine: (no-preload-496809) Calling .DriverName
	I0414 14:03:10.385652 2231816 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 14:03:10.385675 2231816 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 14:03:10.385698 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHHostname
	I0414 14:03:10.388843 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:03:10.389247 2231816 main.go:141] libmachine: (no-preload-496809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:6f:af", ip: ""} in network mk-no-preload-496809: {Iface:virbr2 ExpiryTime:2025-04-14 15:02:23 +0000 UTC Type:0 Mac:52:54:00:24:6f:af Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:no-preload-496809 Clientid:01:52:54:00:24:6f:af}
	I0414 14:03:10.389274 2231816 main.go:141] libmachine: (no-preload-496809) DBG | domain no-preload-496809 has defined IP address 192.168.61.8 and MAC address 52:54:00:24:6f:af in network mk-no-preload-496809
	I0414 14:03:10.389538 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHPort
	I0414 14:03:10.389706 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHKeyPath
	I0414 14:03:10.389860 2231816 main.go:141] libmachine: (no-preload-496809) Calling .GetSSHUsername
	I0414 14:03:10.390014 2231816 sshutil.go:53] new ssh client: &{IP:192.168.61.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/no-preload-496809/id_rsa Username:docker}
	I0414 14:03:10.661007 2231816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:03:10.661048 2231816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 14:03:10.868529 2231816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:03:10.868821 2231816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 14:03:11.575478 2231816 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0414 14:03:11.576631 2231816 node_ready.go:35] waiting up to 6m0s for node "no-preload-496809" to be "Ready" ...
	I0414 14:03:11.592863 2231816 node_ready.go:49] node "no-preload-496809" has status "Ready":"True"
	I0414 14:03:11.593050 2231816 node_ready.go:38] duration metric: took 16.381229ms for node "no-preload-496809" to be "Ready" ...
	I0414 14:03:11.593082 2231816 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 14:03:11.608439 2231816 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2nj9h" in "kube-system" namespace to be "Ready" ...
	I0414 14:03:12.079846 2231816 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-496809" context rescaled to 1 replicas
	I0414 14:03:12.378214 2231816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.50964149s)
	I0414 14:03:12.378512 2231816 main.go:141] libmachine: Making call to close driver server
	I0414 14:03:12.378597 2231816 main.go:141] libmachine: (no-preload-496809) Calling .Close
	I0414 14:03:12.378427 2231816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.50957778s)
	I0414 14:03:12.378754 2231816 main.go:141] libmachine: Making call to close driver server
	I0414 14:03:12.378792 2231816 main.go:141] libmachine: (no-preload-496809) Calling .Close
	I0414 14:03:12.379112 2231816 main.go:141] libmachine: (no-preload-496809) DBG | Closing plugin on server side
	I0414 14:03:12.379115 2231816 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:03:12.379197 2231816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:03:12.379220 2231816 main.go:141] libmachine: Making call to close driver server
	I0414 14:03:12.379285 2231816 main.go:141] libmachine: (no-preload-496809) Calling .Close
	I0414 14:03:12.381078 2231816 main.go:141] libmachine: (no-preload-496809) DBG | Closing plugin on server side
	I0414 14:03:12.381119 2231816 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:03:12.381136 2231816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:03:12.381430 2231816 main.go:141] libmachine: (no-preload-496809) DBG | Closing plugin on server side
	I0414 14:03:12.381470 2231816 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:03:12.381478 2231816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:03:12.381486 2231816 main.go:141] libmachine: Making call to close driver server
	I0414 14:03:12.381493 2231816 main.go:141] libmachine: (no-preload-496809) Calling .Close
	I0414 14:03:12.381713 2231816 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:03:12.381730 2231816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:03:12.422463 2231816 main.go:141] libmachine: Making call to close driver server
	I0414 14:03:12.422509 2231816 main.go:141] libmachine: (no-preload-496809) Calling .Close
	I0414 14:03:12.422842 2231816 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:03:12.422862 2231816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:03:12.425100 2231816 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 14:03:10.721195 2232297 out.go:235]   - Booting up control plane ...
	I0414 14:03:10.721315 2232297 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:03:10.721433 2232297 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:03:10.722305 2232297 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:03:10.740820 2232297 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:03:10.751685 2232297 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:03:10.751773 2232297 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:03:10.914773 2232297 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:03:10.914945 2232297 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:03:11.920619 2232297 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005274166s
	I0414 14:03:11.920765 2232297 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:03:12.425986 2231816 addons.go:514] duration metric: took 2.117013229s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 14:03:13.614512 2231816 pod_ready.go:103] pod "coredns-668d6bf9bc-2nj9h" in "kube-system" namespace has status "Ready":"False"
	I0414 14:03:15.952626 2231425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:03:15.952944 2231425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:03:12.386916 2232414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:03:12.563432 2232414 certs.go:68] Setting up /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086 for IP: 192.168.50.41
	I0414 14:03:12.563461 2232414 certs.go:194] generating shared ca certs ...
	I0414 14:03:12.563484 2232414 certs.go:226] acquiring lock for ca certs: {Name:mkd994da28098ae08a84efba20f096b52fe71222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:03:12.563742 2232414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key
	I0414 14:03:12.563812 2232414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key
	I0414 14:03:12.563828 2232414 certs.go:256] generating profile certs ...
	I0414 14:03:12.563973 2232414 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/client.key
	I0414 14:03:12.564090 2232414 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/apiserver.key.105b5bc6
	I0414 14:03:12.564215 2232414 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/proxy-client.key
	I0414 14:03:12.564416 2232414 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem (1338 bytes)
	W0414 14:03:12.564497 2232414 certs.go:480] ignoring /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400_empty.pem, impossibly tiny 0 bytes
	I0414 14:03:12.564537 2232414 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 14:03:12.564594 2232414 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem (1078 bytes)
	I0414 14:03:12.564653 2232414 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:03:12.564703 2232414 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem (1675 bytes)
	I0414 14:03:12.564799 2232414 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:03:12.565720 2232414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:03:12.708377 2232414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:03:12.843853 2232414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:03:12.978881 2232414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:03:13.036521 2232414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 14:03:13.096254 2232414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:03:13.132489 2232414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:03:13.165941 2232414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:03:13.196223 2232414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /usr/share/ca-certificates/21904002.pem (1708 bytes)
	I0414 14:03:13.288884 2232414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:03:13.360406 2232414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem --> /usr/share/ca-certificates/2190400.pem (1338 bytes)
	I0414 14:03:13.421194 2232414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:03:13.443132 2232414 ssh_runner.go:195] Run: openssl version
	I0414 14:03:13.470412 2232414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21904002.pem && ln -fs /usr/share/ca-certificates/21904002.pem /etc/ssl/certs/21904002.pem"
	I0414 14:03:13.489194 2232414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21904002.pem
	I0414 14:03:13.495415 2232414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 13:02 /usr/share/ca-certificates/21904002.pem
	I0414 14:03:13.495489 2232414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21904002.pem
	I0414 14:03:13.510745 2232414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21904002.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:03:13.544007 2232414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:03:13.559990 2232414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:03:13.564832 2232414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:03:13.564899 2232414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:03:13.570911 2232414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:03:13.582315 2232414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2190400.pem && ln -fs /usr/share/ca-certificates/2190400.pem /etc/ssl/certs/2190400.pem"
	I0414 14:03:13.594697 2232414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2190400.pem
	I0414 14:03:13.599828 2232414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 13:02 /usr/share/ca-certificates/2190400.pem
	I0414 14:03:13.599899 2232414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2190400.pem
	I0414 14:03:13.608479 2232414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2190400.pem /etc/ssl/certs/51391683.0"
	I0414 14:03:13.619998 2232414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:03:13.625131 2232414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 14:03:13.631159 2232414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 14:03:13.637013 2232414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 14:03:13.642865 2232414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 14:03:13.648636 2232414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 14:03:13.654890 2232414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0414 14:03:13.660838 2232414 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-461086 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kuberne
tes-upgrade-461086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.41 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:03:13.660923 2232414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 14:03:13.660998 2232414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:03:13.715186 2232414 cri.go:89] found id: "d68a54fd9eedd39b9796f4ee007de97c0013150e685a3acce247ab6f5be7201f"
	I0414 14:03:13.715212 2232414 cri.go:89] found id: "78e6b61c03a30f4d70edfa135b7ba2f93ccc69b043e7987cc8e4c1304aecae8b"
	I0414 14:03:13.715216 2232414 cri.go:89] found id: "73b9b12661f769ba2012e0bfe9762622a58e5ea9581eaf428f285ea36e7969ed"
	I0414 14:03:13.715219 2232414 cri.go:89] found id: "aaafd14fe772bea7478fae975289e1e2147808e4ad9f57252b379c1a8d976941"
	I0414 14:03:13.715222 2232414 cri.go:89] found id: "8165083f961aab38a986708a3f9ac845d949acb8cbcb41a0b529e063cefd3e90"
	I0414 14:03:13.715225 2232414 cri.go:89] found id: "040f14d9bfaf7549d0444249c618a80644077396632d8a11f416b34aa3dfa8a2"
	I0414 14:03:13.715229 2232414 cri.go:89] found id: "d715e94944e39e098d24cac6c5c735b3d8f487a9c0d637887c9a80a2e0543798"
	I0414 14:03:13.715231 2232414 cri.go:89] found id: "cf46e3dd697363062ce560b6346b157786527e2e951e022d5e26d03f7052d615"
	I0414 14:03:13.715234 2232414 cri.go:89] found id: "c11c00ca5327079e476e27cee839def03efe86cbbd119aa2538e3b3132fb7e6a"
	I0414 14:03:13.715246 2232414 cri.go:89] found id: "1b282e8f232c2af2c5df80284885da60a1155c07e333384c4a72613ed2738c49"
	I0414 14:03:13.715251 2232414 cri.go:89] found id: "e81e63979c808feebad2228c60341a107810bef8c1bcfc0357c153f13badd02e"
	I0414 14:03:13.715255 2232414 cri.go:89] found id: "87f071bb1a653ce3ffcbc32b2a7220d91d8ee8afdae0300ced5a8338525ac18e"
	I0414 14:03:13.715261 2232414 cri.go:89] found id: "9d7ebebbaa0c13e1a40b40c05a4d063b4f3d7db78e45a57f7167e293c9274041"
	I0414 14:03:13.715265 2232414 cri.go:89] found id: "08c0d8a515634d02fed48887de5a837b69e7f0f15d41969ab71cd9e38929399d"
	I0414 14:03:13.715269 2232414 cri.go:89] found id: ""
	I0414 14:03:13.715330 2232414 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-461086 -n kubernetes-upgrade-461086
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-461086 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-461086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-461086
--- FAIL: TestKubernetesUpgrade (478.74s)
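For local triage, a minimal sketch of exercising the same upgrade path by hand with the binary and flags from this run (the profile name and v1.32.2 are taken from the log above; the older --kubernetes-version value is illustrative):

	# start the profile on an older Kubernetes, then start again on v1.32.2 (version below is illustrative)
	out/minikube-linux-amd64 start -p kubernetes-upgrade-461086 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.0
	out/minikube-linux-amd64 start -p kubernetes-upgrade-461086 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.2
	# inspect the kubelet config kubeadm wrote (path taken from the kubelet-start lines above)
	out/minikube-linux-amd64 -p kubernetes-upgrade-461086 ssh "sudo cat /var/lib/kubelet/config.yaml"
	# clean up, as the harness does
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-461086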

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (58.59s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-648153 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-648153 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.072952311s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-648153] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20623
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-648153" primary control-plane node in "pause-648153" cluster
	* Updating the running kvm2 "pause-648153" VM ...
	* Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-648153" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
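For local triage, a minimal sketch of repeating the second start from the command at pause_test.go:92 above and checking its output for the message the assertion at pause_test.go:100 expects (the log file name is illustrative, not produced by the harness):

	out/minikube-linux-amd64 start -p pause-648153 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio 2>&1 | tee second-start.log
	grep -F "The running cluster does not require reconfiguration" second-start.log || echo "reconfiguration message missing from second start output"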
** stderr ** 
	I0414 14:00:55.268662 2231182 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:00:55.269025 2231182 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:00:55.269040 2231182 out.go:358] Setting ErrFile to fd 2...
	I0414 14:00:55.269047 2231182 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:00:55.269353 2231182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 14:00:55.270054 2231182 out.go:352] Setting JSON to false
	I0414 14:00:55.271480 2231182 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":168194,"bootTime":1744471061,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:00:55.271564 2231182 start.go:139] virtualization: kvm guest
	I0414 14:00:55.273961 2231182 out.go:177] * [pause-648153] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:00:55.275348 2231182 notify.go:220] Checking for updates...
	I0414 14:00:55.275356 2231182 out.go:177]   - MINIKUBE_LOCATION=20623
	I0414 14:00:55.276766 2231182 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:00:55.278012 2231182 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:00:55.279196 2231182 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:00:55.280342 2231182 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:00:55.281501 2231182 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:00:55.283228 2231182 config.go:182] Loaded profile config "pause-648153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:00:55.283897 2231182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:00:55.283975 2231182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:00:55.302140 2231182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42703
	I0414 14:00:55.302757 2231182 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:00:55.303384 2231182 main.go:141] libmachine: Using API Version  1
	I0414 14:00:55.303412 2231182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:00:55.303884 2231182 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:00:55.304112 2231182 main.go:141] libmachine: (pause-648153) Calling .DriverName
	I0414 14:00:55.304390 2231182 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:00:55.304722 2231182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:00:55.304798 2231182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:00:55.322132 2231182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43121
	I0414 14:00:55.322613 2231182 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:00:55.323229 2231182 main.go:141] libmachine: Using API Version  1
	I0414 14:00:55.323259 2231182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:00:55.323708 2231182 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:00:55.323945 2231182 main.go:141] libmachine: (pause-648153) Calling .DriverName
	I0414 14:00:55.373719 2231182 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 14:00:55.375206 2231182 start.go:297] selected driver: kvm2
	I0414 14:00:55.375233 2231182 start.go:901] validating driver "kvm2" against &{Name:pause-648153 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pa
use-648153 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.188 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm
:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:00:55.375402 2231182 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:00:55.375741 2231182 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:00:55.375826 2231182 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20623-2183077/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:00:55.398410 2231182 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:00:55.399657 2231182 cni.go:84] Creating CNI manager for ""
	I0414 14:00:55.399745 2231182 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:00:55.399851 2231182 start.go:340] cluster config:
	{Name:pause-648153 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-648153 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.188 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-alias
es:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:00:55.400101 2231182 iso.go:125] acquiring lock: {Name:mk1b6bc811d798b73231639961523f4c8d001a9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:00:55.402927 2231182 out.go:177] * Starting "pause-648153" primary control-plane node in "pause-648153" cluster
	I0414 14:00:55.404269 2231182 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:00:55.404346 2231182 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 14:00:55.404365 2231182 cache.go:56] Caching tarball of preloaded images
	I0414 14:00:55.404526 2231182 preload.go:172] Found /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 14:00:55.404546 2231182 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 14:00:55.404790 2231182 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/pause-648153/config.json ...
	I0414 14:00:55.405122 2231182 start.go:360] acquireMachinesLock for pause-648153: {Name:mka8bf7d0904b7ab9a32ecac2c5513c5d5418afd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:01:13.257894 2231182 start.go:364] duration metric: took 17.85272905s to acquireMachinesLock for "pause-648153"
	I0414 14:01:13.257957 2231182 start.go:96] Skipping create...Using existing machine configuration
	I0414 14:01:13.257967 2231182 fix.go:54] fixHost starting: 
	I0414 14:01:13.258382 2231182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:01:13.258436 2231182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:01:13.276093 2231182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38991
	I0414 14:01:13.276666 2231182 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:01:13.277280 2231182 main.go:141] libmachine: Using API Version  1
	I0414 14:01:13.277312 2231182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:01:13.277663 2231182 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:01:13.277843 2231182 main.go:141] libmachine: (pause-648153) Calling .DriverName
	I0414 14:01:13.277994 2231182 main.go:141] libmachine: (pause-648153) Calling .GetState
	I0414 14:01:13.279575 2231182 fix.go:112] recreateIfNeeded on pause-648153: state=Running err=<nil>
	W0414 14:01:13.279599 2231182 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 14:01:13.281410 2231182 out.go:177] * Updating the running kvm2 "pause-648153" VM ...
	I0414 14:01:13.282463 2231182 machine.go:93] provisionDockerMachine start ...
	I0414 14:01:13.282481 2231182 main.go:141] libmachine: (pause-648153) Calling .DriverName
	I0414 14:01:13.282718 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHHostname
	I0414 14:01:13.285521 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:13.286146 2231182 main.go:141] libmachine: (pause-648153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:e2:99", ip: ""} in network mk-pause-648153: {Iface:virbr3 ExpiryTime:2025-04-14 14:59:39 +0000 UTC Type:0 Mac:52:54:00:79:e2:99 Iaid: IPaddr:192.168.61.188 Prefix:24 Hostname:pause-648153 Clientid:01:52:54:00:79:e2:99}
	I0414 14:01:13.286170 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined IP address 192.168.61.188 and MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:13.286346 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHPort
	I0414 14:01:13.286561 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHKeyPath
	I0414 14:01:13.286727 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHKeyPath
	I0414 14:01:13.286927 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHUsername
	I0414 14:01:13.287119 2231182 main.go:141] libmachine: Using SSH client type: native
	I0414 14:01:13.287358 2231182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.188 22 <nil> <nil>}
	I0414 14:01:13.287374 2231182 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:01:13.394777 2231182 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-648153
	
	I0414 14:01:13.394809 2231182 main.go:141] libmachine: (pause-648153) Calling .GetMachineName
	I0414 14:01:13.395144 2231182 buildroot.go:166] provisioning hostname "pause-648153"
	I0414 14:01:13.395182 2231182 main.go:141] libmachine: (pause-648153) Calling .GetMachineName
	I0414 14:01:13.395396 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHHostname
	I0414 14:01:13.398312 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:13.398728 2231182 main.go:141] libmachine: (pause-648153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:e2:99", ip: ""} in network mk-pause-648153: {Iface:virbr3 ExpiryTime:2025-04-14 14:59:39 +0000 UTC Type:0 Mac:52:54:00:79:e2:99 Iaid: IPaddr:192.168.61.188 Prefix:24 Hostname:pause-648153 Clientid:01:52:54:00:79:e2:99}
	I0414 14:01:13.398760 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined IP address 192.168.61.188 and MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:13.398946 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHPort
	I0414 14:01:13.399112 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHKeyPath
	I0414 14:01:13.399281 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHKeyPath
	I0414 14:01:13.399405 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHUsername
	I0414 14:01:13.399524 2231182 main.go:141] libmachine: Using SSH client type: native
	I0414 14:01:13.399745 2231182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.188 22 <nil> <nil>}
	I0414 14:01:13.399756 2231182 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-648153 && echo "pause-648153" | sudo tee /etc/hostname
	I0414 14:01:13.527595 2231182 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-648153
	
	I0414 14:01:13.527626 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHHostname
	I0414 14:01:13.530649 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:13.531012 2231182 main.go:141] libmachine: (pause-648153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:e2:99", ip: ""} in network mk-pause-648153: {Iface:virbr3 ExpiryTime:2025-04-14 14:59:39 +0000 UTC Type:0 Mac:52:54:00:79:e2:99 Iaid: IPaddr:192.168.61.188 Prefix:24 Hostname:pause-648153 Clientid:01:52:54:00:79:e2:99}
	I0414 14:01:13.531043 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined IP address 192.168.61.188 and MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:13.531226 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHPort
	I0414 14:01:13.531421 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHKeyPath
	I0414 14:01:13.531571 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHKeyPath
	I0414 14:01:13.531704 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHUsername
	I0414 14:01:13.531834 2231182 main.go:141] libmachine: Using SSH client type: native
	I0414 14:01:13.532050 2231182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.188 22 <nil> <nil>}
	I0414 14:01:13.532067 2231182 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-648153' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-648153/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-648153' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:01:13.639434 2231182 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:01:13.639468 2231182 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20623-2183077/.minikube CaCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20623-2183077/.minikube}
	I0414 14:01:13.639489 2231182 buildroot.go:174] setting up certificates
	I0414 14:01:13.639501 2231182 provision.go:84] configureAuth start
	I0414 14:01:13.639516 2231182 main.go:141] libmachine: (pause-648153) Calling .GetMachineName
	I0414 14:01:13.639802 2231182 main.go:141] libmachine: (pause-648153) Calling .GetIP
	I0414 14:01:13.643149 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:13.643605 2231182 main.go:141] libmachine: (pause-648153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:e2:99", ip: ""} in network mk-pause-648153: {Iface:virbr3 ExpiryTime:2025-04-14 14:59:39 +0000 UTC Type:0 Mac:52:54:00:79:e2:99 Iaid: IPaddr:192.168.61.188 Prefix:24 Hostname:pause-648153 Clientid:01:52:54:00:79:e2:99}
	I0414 14:01:13.643637 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined IP address 192.168.61.188 and MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:13.643840 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHHostname
	I0414 14:01:13.646158 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:13.646509 2231182 main.go:141] libmachine: (pause-648153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:e2:99", ip: ""} in network mk-pause-648153: {Iface:virbr3 ExpiryTime:2025-04-14 14:59:39 +0000 UTC Type:0 Mac:52:54:00:79:e2:99 Iaid: IPaddr:192.168.61.188 Prefix:24 Hostname:pause-648153 Clientid:01:52:54:00:79:e2:99}
	I0414 14:01:13.646544 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined IP address 192.168.61.188 and MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:13.646644 2231182 provision.go:143] copyHostCerts
	I0414 14:01:13.646715 2231182 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem, removing ...
	I0414 14:01:13.646733 2231182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem
	I0414 14:01:13.646783 2231182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem (1078 bytes)
	I0414 14:01:13.646872 2231182 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem, removing ...
	I0414 14:01:13.646880 2231182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem
	I0414 14:01:13.646904 2231182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem (1123 bytes)
	I0414 14:01:13.646961 2231182 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem, removing ...
	I0414 14:01:13.646968 2231182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem
	I0414 14:01:13.646984 2231182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem (1675 bytes)
	I0414 14:01:13.647027 2231182 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem org=jenkins.pause-648153 san=[127.0.0.1 192.168.61.188 localhost minikube pause-648153]
	I0414 14:01:13.853686 2231182 provision.go:177] copyRemoteCerts
	I0414 14:01:13.853756 2231182 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:01:13.853784 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHHostname
	I0414 14:01:13.856753 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:13.857215 2231182 main.go:141] libmachine: (pause-648153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:e2:99", ip: ""} in network mk-pause-648153: {Iface:virbr3 ExpiryTime:2025-04-14 14:59:39 +0000 UTC Type:0 Mac:52:54:00:79:e2:99 Iaid: IPaddr:192.168.61.188 Prefix:24 Hostname:pause-648153 Clientid:01:52:54:00:79:e2:99}
	I0414 14:01:13.857247 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined IP address 192.168.61.188 and MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:13.857436 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHPort
	I0414 14:01:13.857619 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHKeyPath
	I0414 14:01:13.857782 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHUsername
	I0414 14:01:13.857882 2231182 sshutil.go:53] new ssh client: &{IP:192.168.61.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/pause-648153/id_rsa Username:docker}
	I0414 14:01:13.943853 2231182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 14:01:13.980159 2231182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:01:14.006392 2231182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 14:01:14.044293 2231182 provision.go:87] duration metric: took 404.774343ms to configureAuth
	I0414 14:01:14.044325 2231182 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:01:14.044549 2231182 config.go:182] Loaded profile config "pause-648153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:01:14.044667 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHHostname
	I0414 14:01:14.047866 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:14.048279 2231182 main.go:141] libmachine: (pause-648153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:e2:99", ip: ""} in network mk-pause-648153: {Iface:virbr3 ExpiryTime:2025-04-14 14:59:39 +0000 UTC Type:0 Mac:52:54:00:79:e2:99 Iaid: IPaddr:192.168.61.188 Prefix:24 Hostname:pause-648153 Clientid:01:52:54:00:79:e2:99}
	I0414 14:01:14.048310 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined IP address 192.168.61.188 and MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:14.048494 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHPort
	I0414 14:01:14.048699 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHKeyPath
	I0414 14:01:14.048877 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHKeyPath
	I0414 14:01:14.049038 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHUsername
	I0414 14:01:14.049168 2231182 main.go:141] libmachine: Using SSH client type: native
	I0414 14:01:14.049389 2231182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.188 22 <nil> <nil>}
	I0414 14:01:14.049410 2231182 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 14:01:21.306029 2231182 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 14:01:21.306061 2231182 machine.go:96] duration metric: took 8.023583273s to provisionDockerMachine
	I0414 14:01:21.306074 2231182 start.go:293] postStartSetup for "pause-648153" (driver="kvm2")
	I0414 14:01:21.306089 2231182 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:01:21.306109 2231182 main.go:141] libmachine: (pause-648153) Calling .DriverName
	I0414 14:01:21.306600 2231182 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:01:21.306674 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHHostname
	I0414 14:01:21.310178 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:21.310651 2231182 main.go:141] libmachine: (pause-648153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:e2:99", ip: ""} in network mk-pause-648153: {Iface:virbr3 ExpiryTime:2025-04-14 14:59:39 +0000 UTC Type:0 Mac:52:54:00:79:e2:99 Iaid: IPaddr:192.168.61.188 Prefix:24 Hostname:pause-648153 Clientid:01:52:54:00:79:e2:99}
	I0414 14:01:21.310674 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined IP address 192.168.61.188 and MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:21.310866 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHPort
	I0414 14:01:21.311085 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHKeyPath
	I0414 14:01:21.311280 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHUsername
	I0414 14:01:21.311436 2231182 sshutil.go:53] new ssh client: &{IP:192.168.61.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/pause-648153/id_rsa Username:docker}
	I0414 14:01:21.396387 2231182 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:01:21.400791 2231182 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:01:21.400826 2231182 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/addons for local assets ...
	I0414 14:01:21.400905 2231182 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/files for local assets ...
	I0414 14:01:21.401008 2231182 filesync.go:149] local asset: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem -> 21904002.pem in /etc/ssl/certs
	I0414 14:01:21.401128 2231182 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:01:21.414110 2231182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:01:21.442535 2231182 start.go:296] duration metric: took 136.439531ms for postStartSetup
	I0414 14:01:21.442589 2231182 fix.go:56] duration metric: took 8.184623707s for fixHost
	I0414 14:01:21.442618 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHHostname
	I0414 14:01:21.445632 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:21.445977 2231182 main.go:141] libmachine: (pause-648153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:e2:99", ip: ""} in network mk-pause-648153: {Iface:virbr3 ExpiryTime:2025-04-14 14:59:39 +0000 UTC Type:0 Mac:52:54:00:79:e2:99 Iaid: IPaddr:192.168.61.188 Prefix:24 Hostname:pause-648153 Clientid:01:52:54:00:79:e2:99}
	I0414 14:01:21.446008 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined IP address 192.168.61.188 and MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:21.446210 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHPort
	I0414 14:01:21.446444 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHKeyPath
	I0414 14:01:21.446723 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHKeyPath
	I0414 14:01:21.446921 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHUsername
	I0414 14:01:21.447145 2231182 main.go:141] libmachine: Using SSH client type: native
	I0414 14:01:21.447409 2231182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.188 22 <nil> <nil>}
	I0414 14:01:21.447422 2231182 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:01:21.549903 2231182 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744639281.543270112
	
	I0414 14:01:21.549942 2231182 fix.go:216] guest clock: 1744639281.543270112
	I0414 14:01:21.549955 2231182 fix.go:229] Guest: 2025-04-14 14:01:21.543270112 +0000 UTC Remote: 2025-04-14 14:01:21.442594264 +0000 UTC m=+26.224365686 (delta=100.675848ms)
	I0414 14:01:21.549985 2231182 fix.go:200] guest clock delta is within tolerance: 100.675848ms
	I0414 14:01:21.549993 2231182 start.go:83] releasing machines lock for "pause-648153", held for 8.292068485s
	I0414 14:01:21.550035 2231182 main.go:141] libmachine: (pause-648153) Calling .DriverName
	I0414 14:01:21.550373 2231182 main.go:141] libmachine: (pause-648153) Calling .GetIP
	I0414 14:01:21.553830 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:21.554259 2231182 main.go:141] libmachine: (pause-648153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:e2:99", ip: ""} in network mk-pause-648153: {Iface:virbr3 ExpiryTime:2025-04-14 14:59:39 +0000 UTC Type:0 Mac:52:54:00:79:e2:99 Iaid: IPaddr:192.168.61.188 Prefix:24 Hostname:pause-648153 Clientid:01:52:54:00:79:e2:99}
	I0414 14:01:21.554287 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined IP address 192.168.61.188 and MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:21.554591 2231182 main.go:141] libmachine: (pause-648153) Calling .DriverName
	I0414 14:01:21.555248 2231182 main.go:141] libmachine: (pause-648153) Calling .DriverName
	I0414 14:01:21.555477 2231182 main.go:141] libmachine: (pause-648153) Calling .DriverName
	I0414 14:01:21.555590 2231182 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:01:21.555643 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHHostname
	I0414 14:01:21.555775 2231182 ssh_runner.go:195] Run: cat /version.json
	I0414 14:01:21.555803 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHHostname
	I0414 14:01:21.558729 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:21.558858 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:21.559167 2231182 main.go:141] libmachine: (pause-648153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:e2:99", ip: ""} in network mk-pause-648153: {Iface:virbr3 ExpiryTime:2025-04-14 14:59:39 +0000 UTC Type:0 Mac:52:54:00:79:e2:99 Iaid: IPaddr:192.168.61.188 Prefix:24 Hostname:pause-648153 Clientid:01:52:54:00:79:e2:99}
	I0414 14:01:21.559191 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined IP address 192.168.61.188 and MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:21.559213 2231182 main.go:141] libmachine: (pause-648153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:e2:99", ip: ""} in network mk-pause-648153: {Iface:virbr3 ExpiryTime:2025-04-14 14:59:39 +0000 UTC Type:0 Mac:52:54:00:79:e2:99 Iaid: IPaddr:192.168.61.188 Prefix:24 Hostname:pause-648153 Clientid:01:52:54:00:79:e2:99}
	I0414 14:01:21.559228 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined IP address 192.168.61.188 and MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:21.559519 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHPort
	I0414 14:01:21.559613 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHPort
	I0414 14:01:21.559674 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHKeyPath
	I0414 14:01:21.559776 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHKeyPath
	I0414 14:01:21.559853 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHUsername
	I0414 14:01:21.559920 2231182 main.go:141] libmachine: (pause-648153) Calling .GetSSHUsername
	I0414 14:01:21.559976 2231182 sshutil.go:53] new ssh client: &{IP:192.168.61.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/pause-648153/id_rsa Username:docker}
	I0414 14:01:21.560022 2231182 sshutil.go:53] new ssh client: &{IP:192.168.61.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/pause-648153/id_rsa Username:docker}
	I0414 14:01:21.670040 2231182 ssh_runner.go:195] Run: systemctl --version
	I0414 14:01:21.676840 2231182 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 14:01:21.841588 2231182 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:01:21.850963 2231182 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:01:21.851058 2231182 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:01:21.860905 2231182 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0414 14:01:21.860937 2231182 start.go:495] detecting cgroup driver to use...
	I0414 14:01:21.861015 2231182 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 14:01:21.883114 2231182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 14:01:21.899840 2231182 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:01:21.899906 2231182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:01:21.918427 2231182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:01:21.935548 2231182 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:01:22.091394 2231182 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:01:22.265962 2231182 docker.go:233] disabling docker service ...
	I0414 14:01:22.266050 2231182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:01:22.306650 2231182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:01:22.346194 2231182 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:01:22.578718 2231182 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:01:22.799031 2231182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:01:22.832122 2231182 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:01:23.031128 2231182 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 14:01:23.031225 2231182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:23.062409 2231182 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 14:01:23.062508 2231182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:23.095859 2231182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:23.168400 2231182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:23.219356 2231182 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:01:23.275496 2231182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:23.318476 2231182 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:23.367591 2231182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:23.401793 2231182 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:01:23.425482 2231182 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:01:23.468417 2231182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:01:23.683101 2231182 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 14:01:24.325620 2231182 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 14:01:24.325712 2231182 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 14:01:24.332667 2231182 start.go:563] Will wait 60s for crictl version
	I0414 14:01:24.332760 2231182 ssh_runner.go:195] Run: which crictl
	I0414 14:01:24.338268 2231182 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:01:24.381257 2231182 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 14:01:24.381348 2231182 ssh_runner.go:195] Run: crio --version
	I0414 14:01:24.414411 2231182 ssh_runner.go:195] Run: crio --version
	I0414 14:01:24.445238 2231182 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 14:01:24.446447 2231182 main.go:141] libmachine: (pause-648153) Calling .GetIP
	I0414 14:01:24.449515 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:24.449938 2231182 main.go:141] libmachine: (pause-648153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:e2:99", ip: ""} in network mk-pause-648153: {Iface:virbr3 ExpiryTime:2025-04-14 14:59:39 +0000 UTC Type:0 Mac:52:54:00:79:e2:99 Iaid: IPaddr:192.168.61.188 Prefix:24 Hostname:pause-648153 Clientid:01:52:54:00:79:e2:99}
	I0414 14:01:24.449963 2231182 main.go:141] libmachine: (pause-648153) DBG | domain pause-648153 has defined IP address 192.168.61.188 and MAC address 52:54:00:79:e2:99 in network mk-pause-648153
	I0414 14:01:24.450322 2231182 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0414 14:01:24.455421 2231182 kubeadm.go:883] updating cluster {Name:pause-648153 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-648153 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.188 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:01:24.455614 2231182 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:01:24.455681 2231182 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:01:24.505217 2231182 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 14:01:24.505244 2231182 crio.go:433] Images already preloaded, skipping extraction
	I0414 14:01:24.505323 2231182 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:01:24.542162 2231182 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 14:01:24.542194 2231182 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:01:24.542204 2231182 kubeadm.go:934] updating node { 192.168.61.188 8443 v1.32.2 crio true true} ...
	I0414 14:01:24.542355 2231182 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-648153 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.188
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:pause-648153 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:01:24.542436 2231182 ssh_runner.go:195] Run: crio config
	I0414 14:01:24.597002 2231182 cni.go:84] Creating CNI manager for ""
	I0414 14:01:24.597027 2231182 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:01:24.597038 2231182 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:01:24.597059 2231182 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.188 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-648153 NodeName:pause-648153 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.188"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.188 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:01:24.597186 2231182 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.188
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-648153"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.188"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.188"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 14:01:24.597290 2231182 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:01:24.608061 2231182 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:01:24.608140 2231182 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 14:01:24.618623 2231182 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0414 14:01:24.638532 2231182 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:01:24.658509 2231182 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0414 14:01:24.676707 2231182 ssh_runner.go:195] Run: grep 192.168.61.188	control-plane.minikube.internal$ /etc/hosts
	I0414 14:01:24.681437 2231182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:01:24.834545 2231182 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:01:24.852632 2231182 certs.go:68] Setting up /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/pause-648153 for IP: 192.168.61.188
	I0414 14:01:24.852665 2231182 certs.go:194] generating shared ca certs ...
	I0414 14:01:24.852687 2231182 certs.go:226] acquiring lock for ca certs: {Name:mkd994da28098ae08a84efba20f096b52fe71222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:01:24.852938 2231182 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key
	I0414 14:01:24.853017 2231182 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key
	I0414 14:01:24.853034 2231182 certs.go:256] generating profile certs ...
	I0414 14:01:24.853158 2231182 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/pause-648153/client.key
	I0414 14:01:24.853253 2231182 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/pause-648153/apiserver.key.e6e0f8a3
	I0414 14:01:24.853315 2231182 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/pause-648153/proxy-client.key
	I0414 14:01:24.853427 2231182 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem (1338 bytes)
	W0414 14:01:24.853457 2231182 certs.go:480] ignoring /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400_empty.pem, impossibly tiny 0 bytes
	I0414 14:01:24.853467 2231182 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 14:01:24.853502 2231182 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem (1078 bytes)
	I0414 14:01:24.853538 2231182 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:01:24.853570 2231182 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem (1675 bytes)
	I0414 14:01:24.853628 2231182 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:01:24.854401 2231182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:01:24.883133 2231182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:01:24.912118 2231182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:01:24.940686 2231182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:01:24.969524 2231182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/pause-648153/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 14:01:24.998480 2231182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/pause-648153/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0414 14:01:25.027776 2231182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/pause-648153/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:01:25.063023 2231182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/pause-648153/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:01:25.090724 2231182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:01:25.115809 2231182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem --> /usr/share/ca-certificates/2190400.pem (1338 bytes)
	I0414 14:01:25.142195 2231182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /usr/share/ca-certificates/21904002.pem (1708 bytes)
	I0414 14:01:25.167834 2231182 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:01:25.186081 2231182 ssh_runner.go:195] Run: openssl version
	I0414 14:01:25.195066 2231182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21904002.pem && ln -fs /usr/share/ca-certificates/21904002.pem /etc/ssl/certs/21904002.pem"
	I0414 14:01:25.212524 2231182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21904002.pem
	I0414 14:01:25.218727 2231182 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 13:02 /usr/share/ca-certificates/21904002.pem
	I0414 14:01:25.218803 2231182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21904002.pem
	I0414 14:01:25.224824 2231182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21904002.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:01:25.234896 2231182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:01:25.246543 2231182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:01:25.251315 2231182 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:01:25.251380 2231182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:01:25.257731 2231182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:01:25.269219 2231182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2190400.pem && ln -fs /usr/share/ca-certificates/2190400.pem /etc/ssl/certs/2190400.pem"
	I0414 14:01:25.281359 2231182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2190400.pem
	I0414 14:01:25.286659 2231182 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 13:02 /usr/share/ca-certificates/2190400.pem
	I0414 14:01:25.286725 2231182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2190400.pem
	I0414 14:01:25.293241 2231182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2190400.pem /etc/ssl/certs/51391683.0"
	I0414 14:01:25.310984 2231182 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:01:25.353111 2231182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 14:01:25.390445 2231182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 14:01:25.402833 2231182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 14:01:25.463688 2231182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 14:01:25.533034 2231182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 14:01:25.554357 2231182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0414 14:01:25.579850 2231182 kubeadm.go:392] StartCluster: {Name:pause-648153 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-648153 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.188 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:01:25.580035 2231182 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 14:01:25.580153 2231182 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:01:25.772290 2231182 cri.go:89] found id: "a72f3a7c9af9bdfa1c23e6c10652eaecd817b2698b70f563618cb8633540811d"
	I0414 14:01:25.772323 2231182 cri.go:89] found id: "53812db1656a8544a0abe67ec5d3d9e5ff5f9ac7329e534d6639bad53d98eff0"
	I0414 14:01:25.772329 2231182 cri.go:89] found id: "1325a769984c7c2b1abd652d4fff1402a1f2fbae781977b7497a086f67193d89"
	I0414 14:01:25.772333 2231182 cri.go:89] found id: "cce231159b9655f6c50bff76f24391c112dda60887b48247d4818ede864b7678"
	I0414 14:01:25.772338 2231182 cri.go:89] found id: "49a768cc6261e969cdf48541d3c9173bf7169640aef0ffc6dacfdadf3990e58d"
	I0414 14:01:25.772342 2231182 cri.go:89] found id: "2a7e70afba867cc238cf48ae24ccb9872aac09f0a86076a8e8dd9b23be3e32e3"
	I0414 14:01:25.772346 2231182 cri.go:89] found id: "1e7367f7e6c315ef9ea81ce091a3bcfc29d59646a02c8e07d7990dcda27ee228"
	I0414 14:01:25.772350 2231182 cri.go:89] found id: "fa4e9733d01fcfcc4bc8776e0ea7f5dcb45965f5549a566bad5fb84ff71efce2"
	I0414 14:01:25.772355 2231182 cri.go:89] found id: "e94cc6e71687066eb9df1a9f13c147ab39ae0a9aba931e5a762d3e6e24a66246"
	I0414 14:01:25.772365 2231182 cri.go:89] found id: ""
	I0414 14:01:25.772421 2231182 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-648153 -n pause-648153
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-648153 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-648153 logs -n 25: (3.447290351s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-793608 sudo docker                         | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo cat                            | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo cat                            | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo cat                            | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo cat                            | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo find                           | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo crio                           | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-793608                                     | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC | 14 Apr 25 14:00 UTC |
	| start   | -p force-systemd-flag-509258                         | force-systemd-flag-509258 | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC | 14 Apr 25 14:01 UTC |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-461086                         | kubernetes-upgrade-461086 | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC | 14 Apr 25 14:00 UTC |
	| start   | -p pause-648153                                      | pause-648153              | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC | 14 Apr 25 14:01 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-742924                            | running-upgrade-742924    | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC | 14 Apr 25 14:00 UTC |
	| start   | -p kubernetes-upgrade-461086                         | kubernetes-upgrade-461086 | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-954411                            | old-k8s-version-954411    | jenkins | v1.35.0 | 14 Apr 25 14:01 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-509258 ssh cat                    | force-systemd-flag-509258 | jenkins | v1.35.0 | 14 Apr 25 14:01 UTC | 14 Apr 25 14:01 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-509258                         | force-systemd-flag-509258 | jenkins | v1.35.0 | 14 Apr 25 14:01 UTC | 14 Apr 25 14:01 UTC |
	| start   | -p no-preload-496809                                 | no-preload-496809         | jenkins | v1.35.0 | 14 Apr 25 14:01 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:01:35
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:01:35.181337 2231816 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:01:35.181648 2231816 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:01:35.181662 2231816 out.go:358] Setting ErrFile to fd 2...
	I0414 14:01:35.181669 2231816 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:01:35.181958 2231816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 14:01:35.182802 2231816 out.go:352] Setting JSON to false
	I0414 14:01:35.184027 2231816 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":168234,"bootTime":1744471061,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:01:35.184096 2231816 start.go:139] virtualization: kvm guest
	I0414 14:01:35.185988 2231816 out.go:177] * [no-preload-496809] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:01:35.187468 2231816 out.go:177]   - MINIKUBE_LOCATION=20623
	I0414 14:01:35.187469 2231816 notify.go:220] Checking for updates...
	I0414 14:01:35.188914 2231816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:01:35.190144 2231816 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:01:35.191434 2231816 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:01:35.192736 2231816 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:01:35.193818 2231816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:01:35.195482 2231816 config.go:182] Loaded profile config "kubernetes-upgrade-461086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:01:35.195668 2231816 config.go:182] Loaded profile config "old-k8s-version-954411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 14:01:35.195853 2231816 config.go:182] Loaded profile config "pause-648153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:01:35.195997 2231816 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:01:35.242146 2231816 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:01:35.243235 2231816 start.go:297] selected driver: kvm2
	I0414 14:01:35.243250 2231816 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:01:35.243263 2231816 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:01:35.244261 2231816 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.244350 2231816 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20623-2183077/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:01:35.260111 2231816 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:01:35.260173 2231816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 14:01:35.260442 2231816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:01:35.260483 2231816 cni.go:84] Creating CNI manager for ""
	I0414 14:01:35.260540 2231816 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:01:35.260552 2231816 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 14:01:35.260609 2231816 start.go:340] cluster config:
	{Name:no-preload-496809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-496809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:01:35.260760 2231816 iso.go:125] acquiring lock: {Name:mk1b6bc811d798b73231639961523f4c8d001a9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.262762 2231816 out.go:177] * Starting "no-preload-496809" primary control-plane node in "no-preload-496809" cluster
	I0414 14:01:32.825450 2231182 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 14:01:32.840650 2231182 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 14:01:32.867503 2231182 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 14:01:32.874714 2231182 system_pods.go:59] 6 kube-system pods found
	I0414 14:01:32.874776 2231182 system_pods.go:61] "coredns-668d6bf9bc-547jp" [1e9d901a-c53e-4a1d-9e5b-cb668fc9c105] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 14:01:32.874791 2231182 system_pods.go:61] "etcd-pause-648153" [4234866f-0e92-46ef-942b-9f0f226eda75] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0414 14:01:32.874812 2231182 system_pods.go:61] "kube-apiserver-pause-648153" [4e676b12-2146-4f5c-a2ac-bc90525b5ee1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0414 14:01:32.874831 2231182 system_pods.go:61] "kube-controller-manager-pause-648153" [0af79766-a5a2-4ea7-b82e-1258520095ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0414 14:01:32.874846 2231182 system_pods.go:61] "kube-proxy-25n6s" [8400d3e1-b5ba-49a2-b916-fe8d6188fd6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0414 14:01:32.874860 2231182 system_pods.go:61] "kube-scheduler-pause-648153" [609512ad-5b0f-4810-ab03-4655c7bac009] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0414 14:01:32.874873 2231182 system_pods.go:74] duration metric: took 7.34202ms to wait for pod list to return data ...
	I0414 14:01:32.874884 2231182 node_conditions.go:102] verifying NodePressure condition ...
	I0414 14:01:32.880077 2231182 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 14:01:32.880137 2231182 node_conditions.go:123] node cpu capacity is 2
	I0414 14:01:32.880157 2231182 node_conditions.go:105] duration metric: took 5.26398ms to run NodePressure ...
	I0414 14:01:32.880183 2231182 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 14:01:33.177563 2231182 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0414 14:01:33.182678 2231182 kubeadm.go:739] kubelet initialised
	I0414 14:01:33.182709 2231182 kubeadm.go:740] duration metric: took 5.108017ms waiting for restarted kubelet to initialise ...
	I0414 14:01:33.182720 2231182 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 14:01:33.186569 2231182 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-547jp" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:34.193711 2231182 pod_ready.go:93] pod "coredns-668d6bf9bc-547jp" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:34.193743 2231182 pod_ready.go:82] duration metric: took 1.00714443s for pod "coredns-668d6bf9bc-547jp" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:34.193757 2231182 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:36.442737 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:36.443873 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | unable to find current IP address of domain kubernetes-upgrade-461086 in network mk-kubernetes-upgrade-461086
	I0414 14:01:36.443990 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 14:01:36.443865 2231562 retry.go:31] will retry after 4.466060986s: waiting for domain to come up
	I0414 14:01:35.263700 2231816 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:01:35.263832 2231816 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/config.json ...
	I0414 14:01:35.263871 2231816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/config.json: {Name:mk4733ea686e19da28de35e918d5ba0f91e27fca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:01:35.263931 2231816 cache.go:107] acquiring lock: {Name:mk8bccd379934f87abefd6ca9cc6e0764b72a176 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.263967 2231816 cache.go:107] acquiring lock: {Name:mk18f258d09625d9b461d745de6d396f14868aea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.264069 2231816 start.go:360] acquireMachinesLock for no-preload-496809: {Name:mka8bf7d0904b7ab9a32ecac2c5513c5d5418afd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:01:35.264085 2231816 cache.go:107] acquiring lock: {Name:mk74c33da3b82a06c8113eb1f480b288acb9991d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.264170 2231816 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.2
	I0414 14:01:35.264143 2231816 cache.go:107] acquiring lock: {Name:mkae2e56e08b777aa8021c824fdf960ed6abaa4a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.264144 2231816 cache.go:107] acquiring lock: {Name:mkfbd5e4d444bc41cfae970b03510b4410bdbc22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.264177 2231816 cache.go:107] acquiring lock: {Name:mk507c51444df1a037dcb1e883f106a8a46a578b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.264193 2231816 cache.go:115] /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0414 14:01:35.264273 2231816 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 351.948µs
	I0414 14:01:35.264267 2231816 cache.go:107] acquiring lock: {Name:mk8569967c15be76de24392934114068f6b6f82a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.264301 2231816 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0414 14:01:35.264331 2231816 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0414 14:01:35.264360 2231816 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.2
	I0414 14:01:35.264331 2231816 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0414 14:01:35.264510 2231816 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0414 14:01:35.264540 2231816 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0414 14:01:35.264505 2231816 cache.go:107] acquiring lock: {Name:mkefbfd236acc12d8d204e84c35f5e0182d15bfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.264808 2231816 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.2
	I0414 14:01:35.265562 2231816 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.2
	I0414 14:01:35.265641 2231816 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0414 14:01:35.265566 2231816 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.2
	I0414 14:01:35.265863 2231816 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0414 14:01:35.265897 2231816 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.2
	I0414 14:01:35.266063 2231816 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0414 14:01:35.266073 2231816 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0414 14:01:35.426154 2231816 cache.go:162] opening:  /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0414 14:01:35.450366 2231816 cache.go:162] opening:  /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2
	I0414 14:01:35.469250 2231816 cache.go:162] opening:  /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0414 14:01:35.483998 2231816 cache.go:157] /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0414 14:01:35.484148 2231816 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 219.911594ms
	I0414 14:01:35.484175 2231816 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0414 14:01:35.590695 2231816 cache.go:162] opening:  /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2
	I0414 14:01:35.591300 2231816 cache.go:162] opening:  /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2
	I0414 14:01:35.593512 2231816 cache.go:162] opening:  /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2
	I0414 14:01:35.701856 2231816 cache.go:157] /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 exists
	I0414 14:01:35.701888 2231816 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.2" -> "/home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2" took 437.93472ms
	I0414 14:01:35.701899 2231816 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.2 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 succeeded
	I0414 14:01:35.961462 2231816 cache.go:162] opening:  /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0414 14:01:36.703929 2231816 cache.go:157] /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 exists
	I0414 14:01:36.703961 2231816 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.2" -> "/home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2" took 1.43994066s
	I0414 14:01:36.703973 2231816 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.2 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 succeeded
	I0414 14:01:36.987967 2231816 cache.go:157] /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0414 14:01:36.988001 2231816 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 1.723900865s
	I0414 14:01:36.988019 2231816 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0414 14:01:37.036094 2231816 cache.go:157] /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 exists
	I0414 14:01:37.036128 2231816 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.2" -> "/home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2" took 1.772038971s
	I0414 14:01:37.036140 2231816 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.2 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 succeeded
	I0414 14:01:37.343480 2231816 cache.go:157] /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 exists
	I0414 14:01:37.343510 2231816 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.2" -> "/home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2" took 2.079556309s
	I0414 14:01:37.343523 2231816 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.2 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 succeeded
	I0414 14:01:37.450033 2231816 cache.go:157] /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0414 14:01:37.450063 2231816 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 2.185889176s
	I0414 14:01:37.450075 2231816 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0414 14:01:37.450097 2231816 cache.go:87] Successfully saved all images to host disk.
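	The 2231816 lines above show the image pre-cache step: for each control-plane image a lock is acquired, the cached tarball under .minikube/cache/images is checked, and only missing images are downloaded and saved, in parallel. The Go below is a minimal sketch of that check-then-cache pattern; cacheImages and fetchImage are hypothetical stand-ins, not minikube's actual code.
	
	// Sketch only: skip images whose cache tarball already exists, fetch the rest concurrently.
	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
		"sync"
	)
	
	// fetchImage is a placeholder for the real pull-and-save-as-tarball step.
	func fetchImage(image, dest string) error {
		return os.WriteFile(dest, []byte{}, 0o644)
	}
	
	func cacheImages(cacheDir string, images []string) error {
		var wg sync.WaitGroup
		errs := make(chan error, len(images))
		for _, img := range images {
			// e.g. registry.k8s.io/pause:3.10 -> <cacheDir>/registry.k8s.io/pause_3.10
			dest := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
			if _, err := os.Stat(dest); err == nil {
				fmt.Printf("cache image %q already exists, skipping\n", img)
				continue
			}
			wg.Add(1)
			go func(img, dest string) {
				defer wg.Done()
				if err := os.MkdirAll(filepath.Dir(dest), 0o755); err != nil {
					errs <- err
					return
				}
				if err := fetchImage(img, dest); err != nil {
					errs <- err
					return
				}
				fmt.Printf("saved %q to %q\n", img, dest)
			}(img, dest)
		}
		wg.Wait()
		close(errs)
		for err := range errs {
			return err
		}
		return nil
	}
	
	func main() {
		images := []string{"registry.k8s.io/pause:3.10", "registry.k8s.io/kube-proxy:v1.32.2"}
		if err := cacheImages("/tmp/image-cache", images); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}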
	I0414 14:01:36.200764 2231182 pod_ready.go:103] pod "etcd-pause-648153" in "kube-system" namespace has status "Ready":"False"
	I0414 14:01:38.702400 2231182 pod_ready.go:103] pod "etcd-pause-648153" in "kube-system" namespace has status "Ready":"False"
	I0414 14:01:42.385450 2231425 start.go:364] duration metric: took 35.396017033s to acquireMachinesLock for "old-k8s-version-954411"
	I0414 14:01:42.385550 2231425 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-954411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-954411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:01:42.385687 2231425 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:01:40.914962 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:40.915608 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has current primary IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:40.915644 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) found domain IP: 192.168.50.41
	I0414 14:01:40.915653 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) reserving static IP address...
	I0414 14:01:40.916124 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) reserved static IP address 192.168.50.41 for domain kubernetes-upgrade-461086
	I0414 14:01:40.916151 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-461086", mac: "52:54:00:66:0c:5b", ip: "192.168.50.41"} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:40.916172 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) waiting for SSH...
	I0414 14:01:40.916202 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | skip adding static IP to network mk-kubernetes-upgrade-461086 - found existing host DHCP lease matching {name: "kubernetes-upgrade-461086", mac: "52:54:00:66:0c:5b", ip: "192.168.50.41"}
	I0414 14:01:40.916224 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | Getting to WaitForSSH function...
	I0414 14:01:40.918427 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:40.918795 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:40.918827 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:40.918904 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | Using SSH client type: external
	I0414 14:01:40.918948 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | Using SSH private key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa (-rw-------)
	I0414 14:01:40.919000 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:01:40.919017 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | About to run SSH command:
	I0414 14:01:40.919026 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | exit 0
	I0414 14:01:41.044687 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | SSH cmd err, output: <nil>: 
	I0414 14:01:41.045156 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetConfigRaw
	I0414 14:01:41.045784 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetIP
	I0414 14:01:41.048903 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.049310 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:41.049341 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.049567 2231322 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/config.json ...
	I0414 14:01:41.049797 2231322 machine.go:93] provisionDockerMachine start ...
	I0414 14:01:41.049816 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:01:41.049995 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:41.052498 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.052796 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:41.052822 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.052956 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:01:41.053116 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:41.053270 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:41.053365 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:01:41.053496 2231322 main.go:141] libmachine: Using SSH client type: native
	I0414 14:01:41.053730 2231322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 14:01:41.053744 2231322 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:01:41.165279 2231322 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 14:01:41.165312 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetMachineName
	I0414 14:01:41.165627 2231322 buildroot.go:166] provisioning hostname "kubernetes-upgrade-461086"
	I0414 14:01:41.165664 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetMachineName
	I0414 14:01:41.165904 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:41.168764 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.169141 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:41.169182 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.169297 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:01:41.169499 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:41.169645 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:41.169753 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:01:41.169868 2231322 main.go:141] libmachine: Using SSH client type: native
	I0414 14:01:41.170160 2231322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 14:01:41.170178 2231322 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-461086 && echo "kubernetes-upgrade-461086" | sudo tee /etc/hostname
	I0414 14:01:41.296252 2231322 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-461086
	
	I0414 14:01:41.296290 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:41.299486 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.299887 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:41.299920 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.300114 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:01:41.300296 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:41.300398 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:41.300511 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:01:41.300712 2231322 main.go:141] libmachine: Using SSH client type: native
	I0414 14:01:41.300952 2231322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 14:01:41.300969 2231322 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-461086' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-461086/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-461086' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:01:41.421964 2231322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:01:41.422001 2231322 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20623-2183077/.minikube CaCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20623-2183077/.minikube}
	I0414 14:01:41.422026 2231322 buildroot.go:174] setting up certificates
	I0414 14:01:41.422040 2231322 provision.go:84] configureAuth start
	I0414 14:01:41.422054 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetMachineName
	I0414 14:01:41.422393 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetIP
	I0414 14:01:41.425179 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.425647 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:41.425694 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.425907 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:41.428794 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.429198 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:41.429238 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.429427 2231322 provision.go:143] copyHostCerts
	I0414 14:01:41.429484 2231322 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem, removing ...
	I0414 14:01:41.429504 2231322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem
	I0414 14:01:41.429562 2231322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem (1078 bytes)
	I0414 14:01:41.429663 2231322 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem, removing ...
	I0414 14:01:41.429671 2231322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem
	I0414 14:01:41.429689 2231322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem (1123 bytes)
	I0414 14:01:41.429763 2231322 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem, removing ...
	I0414 14:01:41.429772 2231322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem
	I0414 14:01:41.429793 2231322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem (1675 bytes)
	I0414 14:01:41.429874 2231322 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-461086 san=[127.0.0.1 192.168.50.41 kubernetes-upgrade-461086 localhost minikube]
	I0414 14:01:41.738994 2231322 provision.go:177] copyRemoteCerts
	I0414 14:01:41.739069 2231322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:01:41.739097 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:41.741988 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.742340 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:41.742377 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.742533 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:01:41.742738 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:41.742886 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:01:41.743033 2231322 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa Username:docker}
	I0414 14:01:41.827151 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 14:01:41.853578 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0414 14:01:41.877764 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:01:41.901387 2231322 provision.go:87] duration metric: took 479.332038ms to configureAuth
	I0414 14:01:41.901428 2231322 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:01:41.901597 2231322 config.go:182] Loaded profile config "kubernetes-upgrade-461086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:01:41.901676 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:41.904356 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.904760 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:41.904793 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.905117 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:01:41.905388 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:41.905565 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:41.905706 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:01:41.905856 2231322 main.go:141] libmachine: Using SSH client type: native
	I0414 14:01:41.906087 2231322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 14:01:41.906101 2231322 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 14:01:42.141445 2231322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 14:01:42.141499 2231322 machine.go:96] duration metric: took 1.091683943s to provisionDockerMachine
	I0414 14:01:42.141516 2231322 start.go:293] postStartSetup for "kubernetes-upgrade-461086" (driver="kvm2")
	I0414 14:01:42.141531 2231322 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:01:42.141566 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:01:42.141940 2231322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:01:42.141976 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:42.144768 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.145117 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:42.145149 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.145285 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:01:42.145477 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:42.145658 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:01:42.145813 2231322 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa Username:docker}
	I0414 14:01:42.233475 2231322 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:01:42.238032 2231322 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:01:42.238063 2231322 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/addons for local assets ...
	I0414 14:01:42.238129 2231322 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/files for local assets ...
	I0414 14:01:42.238239 2231322 filesync.go:149] local asset: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem -> 21904002.pem in /etc/ssl/certs
	I0414 14:01:42.238373 2231322 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:01:42.248142 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:01:42.272322 2231322 start.go:296] duration metric: took 130.790547ms for postStartSetup
	I0414 14:01:42.272365 2231322 fix.go:56] duration metric: took 20.722087655s for fixHost
	I0414 14:01:42.272388 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:42.275222 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.275485 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:42.275519 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.275661 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:01:42.275862 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:42.276024 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:42.276121 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:01:42.276318 2231322 main.go:141] libmachine: Using SSH client type: native
	I0414 14:01:42.276545 2231322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 14:01:42.276558 2231322 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:01:42.385265 2231322 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744639302.363500668
	
	I0414 14:01:42.385295 2231322 fix.go:216] guest clock: 1744639302.363500668
	I0414 14:01:42.385305 2231322 fix.go:229] Guest: 2025-04-14 14:01:42.363500668 +0000 UTC Remote: 2025-04-14 14:01:42.2723687 +0000 UTC m=+43.619438118 (delta=91.131968ms)
	I0414 14:01:42.385334 2231322 fix.go:200] guest clock delta is within tolerance: 91.131968ms
	I0414 14:01:42.385341 2231322 start.go:83] releasing machines lock for "kubernetes-upgrade-461086", held for 20.835191505s
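	The fix.go lines above read the guest VM's clock over SSH, compare it with the host's, and accept the skew when it falls inside a tolerance. A small Go sketch of that comparison, with illustrative values rather than minikube's real tolerance:
	
	// Sketch only: absolute guest/host clock delta checked against a tolerance.
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}
	
	func main() {
		host := time.Now()
		guest := host.Add(91 * time.Millisecond) // roughly the ~91ms delta seen in the log
		delta, ok := clockDeltaWithinTolerance(guest, host, time.Second)
		fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
	}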
	I0414 14:01:42.385376 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:01:42.385678 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetIP
	I0414 14:01:42.388601 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.389027 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:42.389072 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.389323 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:01:42.389939 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:01:42.390137 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:01:42.390243 2231322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:01:42.390309 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:42.390354 2231322 ssh_runner.go:195] Run: cat /version.json
	I0414 14:01:42.390384 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:42.393180 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.393408 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.393633 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:42.393671 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.393814 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:01:42.393910 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:42.393935 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.393974 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:42.394149 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:01:42.394189 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:01:42.394310 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:42.394310 2231322 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa Username:docker}
	I0414 14:01:42.394474 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:01:42.394607 2231322 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa Username:docker}
	I0414 14:01:42.474122 2231322 ssh_runner.go:195] Run: systemctl --version
	I0414 14:01:42.505539 2231322 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 14:01:42.651222 2231322 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:01:42.661445 2231322 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:01:42.661538 2231322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:01:42.680283 2231322 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:01:42.680318 2231322 start.go:495] detecting cgroup driver to use...
	I0414 14:01:42.680386 2231322 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 14:01:42.697511 2231322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 14:01:42.712035 2231322 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:01:42.712096 2231322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:01:42.725771 2231322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:01:42.739270 2231322 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:01:42.859060 2231322 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:01:43.018755 2231322 docker.go:233] disabling docker service ...
	I0414 14:01:43.018839 2231322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:01:43.033478 2231322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:01:43.045921 2231322 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:01:43.189045 2231322 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:01:43.321155 2231322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:01:43.337744 2231322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:01:43.356199 2231322 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 14:01:43.356284 2231322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:43.366035 2231322 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 14:01:43.366123 2231322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:43.376252 2231322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:43.386047 2231322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:43.396559 2231322 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:01:43.406935 2231322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:43.417596 2231322 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:43.434823 2231322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:43.445432 2231322 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:01:43.455324 2231322 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:01:43.455373 2231322 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:01:43.469972 2231322 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:01:43.482649 2231322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:01:43.605961 2231322 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 14:01:41.199453 2231182 pod_ready.go:103] pod "etcd-pause-648153" in "kube-system" namespace has status "Ready":"False"
	I0414 14:01:43.202623 2231182 pod_ready.go:93] pod "etcd-pause-648153" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:43.202648 2231182 pod_ready.go:82] duration metric: took 9.008882393s for pod "etcd-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:43.202658 2231182 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:43.708985 2231182 pod_ready.go:93] pod "kube-apiserver-pause-648153" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:43.709035 2231182 pod_ready.go:82] duration metric: took 506.355802ms for pod "kube-apiserver-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:43.709052 2231182 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:43.713829 2231182 pod_ready.go:93] pod "kube-controller-manager-pause-648153" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:43.713858 2231182 pod_ready.go:82] duration metric: took 4.795969ms for pod "kube-controller-manager-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:43.713871 2231182 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-25n6s" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:43.719206 2231182 pod_ready.go:93] pod "kube-proxy-25n6s" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:43.719234 2231182 pod_ready.go:82] duration metric: took 5.35544ms for pod "kube-proxy-25n6s" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:43.719246 2231182 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:43.724666 2231182 pod_ready.go:93] pod "kube-scheduler-pause-648153" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:43.724688 2231182 pod_ready.go:82] duration metric: took 5.433234ms for pod "kube-scheduler-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:43.724697 2231182 pod_ready.go:39] duration metric: took 10.541963231s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
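	The pod_ready lines above poll each system-critical pod until its Ready condition is "True" or the 4m0s budget runs out. The Go below is a minimal sketch of such a poll-until-deadline loop; waitFor and the readiness predicate are hypothetical stand-ins, not the minikube pod_ready implementation.
	
	// Sketch only: poll a condition at a fixed interval until it holds or a deadline passes.
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// waitFor calls check every interval until it returns true, an error, or timeout elapses.
	func waitFor(timeout, interval time.Duration, check func() (bool, error)) error {
		deadline := time.Now().Add(timeout)
		for {
			ok, err := check()
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for condition")
			}
			time.Sleep(interval)
		}
	}
	
	func main() {
		start := time.Now()
		// Stand-in for "pod has status Ready": becomes true after ~2s.
		err := waitFor(10*time.Second, 500*time.Millisecond, func() (bool, error) {
			return time.Since(start) > 2*time.Second, nil
		})
		fmt.Println("result:", err, "after", time.Since(start).Round(time.Millisecond))
	}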
	I0414 14:01:43.724719 2231182 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:01:43.738435 2231182 ops.go:34] apiserver oom_adj: -16
	I0414 14:01:43.738459 2231182 kubeadm.go:597] duration metric: took 17.836791094s to restartPrimaryControlPlane
	I0414 14:01:43.738470 2231182 kubeadm.go:394] duration metric: took 18.158639118s to StartCluster
	I0414 14:01:43.738493 2231182 settings.go:142] acquiring lock: {Name:mk2be36efecc8d95b489214d6449055db55f6f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:01:43.738586 2231182 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:01:43.739384 2231182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/kubeconfig: {Name:mka4d12cff403cd78c270c5ea752d21aa135c1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:01:43.739655 2231182 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.188 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:01:43.739708 2231182 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:01:43.739886 2231182 config.go:182] Loaded profile config "pause-648153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:01:43.741466 2231182 out.go:177] * Verifying Kubernetes components...
	I0414 14:01:43.741483 2231182 out.go:177] * Enabled addons: 
	I0414 14:01:43.719304 2231322 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 14:01:43.719378 2231322 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 14:01:43.726236 2231322 start.go:563] Will wait 60s for crictl version
	I0414 14:01:43.726311 2231322 ssh_runner.go:195] Run: which crictl
	I0414 14:01:43.730645 2231322 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:01:43.786832 2231322 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 14:01:43.786930 2231322 ssh_runner.go:195] Run: crio --version
	I0414 14:01:43.822362 2231322 ssh_runner.go:195] Run: crio --version
	I0414 14:01:43.858700 2231322 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 14:01:43.742844 2231182 addons.go:514] duration metric: took 3.147015ms for enable addons: enabled=[]
	I0414 14:01:43.742901 2231182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:01:43.945766 2231182 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:01:43.965032 2231182 node_ready.go:35] waiting up to 6m0s for node "pause-648153" to be "Ready" ...
	I0414 14:01:43.968109 2231182 node_ready.go:49] node "pause-648153" has status "Ready":"True"
	I0414 14:01:43.968136 2231182 node_ready.go:38] duration metric: took 3.071347ms for node "pause-648153" to be "Ready" ...
	I0414 14:01:43.968147 2231182 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 14:01:44.000637 2231182 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-547jp" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:44.399767 2231182 pod_ready.go:93] pod "coredns-668d6bf9bc-547jp" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:44.399804 2231182 pod_ready.go:82] duration metric: took 399.13575ms for pod "coredns-668d6bf9bc-547jp" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:44.399818 2231182 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:44.799349 2231182 pod_ready.go:93] pod "etcd-pause-648153" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:44.799383 2231182 pod_ready.go:82] duration metric: took 399.556496ms for pod "etcd-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:44.799398 2231182 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:45.198480 2231182 pod_ready.go:93] pod "kube-apiserver-pause-648153" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:45.198518 2231182 pod_ready.go:82] duration metric: took 399.110953ms for pod "kube-apiserver-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:45.198533 2231182 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:42.387303 2231425 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:01:42.387570 2231425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:01:42.387643 2231425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:01:42.408560 2231425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46147
	I0414 14:01:42.409136 2231425 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:01:42.409723 2231425 main.go:141] libmachine: Using API Version  1
	I0414 14:01:42.409750 2231425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:01:42.410166 2231425 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:01:42.410355 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetMachineName
	I0414 14:01:42.410521 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:01:42.410672 2231425 start.go:159] libmachine.API.Create for "old-k8s-version-954411" (driver="kvm2")
	I0414 14:01:42.410709 2231425 client.go:168] LocalClient.Create starting
	I0414 14:01:42.410743 2231425 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem
	I0414 14:01:42.410794 2231425 main.go:141] libmachine: Decoding PEM data...
	I0414 14:01:42.410813 2231425 main.go:141] libmachine: Parsing certificate...
	I0414 14:01:42.410892 2231425 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem
	I0414 14:01:42.410921 2231425 main.go:141] libmachine: Decoding PEM data...
	I0414 14:01:42.410939 2231425 main.go:141] libmachine: Parsing certificate...
	I0414 14:01:42.410963 2231425 main.go:141] libmachine: Running pre-create checks...
	I0414 14:01:42.410974 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .PreCreateCheck
	I0414 14:01:42.411361 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetConfigRaw
	I0414 14:01:42.411765 2231425 main.go:141] libmachine: Creating machine...
	I0414 14:01:42.411782 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .Create
	I0414 14:01:42.411958 2231425 main.go:141] libmachine: (old-k8s-version-954411) creating KVM machine...
	I0414 14:01:42.411979 2231425 main.go:141] libmachine: (old-k8s-version-954411) creating network...
	I0414 14:01:42.413183 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found existing default KVM network
	I0414 14:01:42.414239 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:42.414087 2231858 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201190}
	I0414 14:01:42.414263 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | created network xml: 
	I0414 14:01:42.414282 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | <network>
	I0414 14:01:42.414295 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |   <name>mk-old-k8s-version-954411</name>
	I0414 14:01:42.414309 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |   <dns enable='no'/>
	I0414 14:01:42.414320 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |   
	I0414 14:01:42.414333 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 14:01:42.414344 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |     <dhcp>
	I0414 14:01:42.414353 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 14:01:42.414365 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |     </dhcp>
	I0414 14:01:42.414373 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |   </ip>
	I0414 14:01:42.414386 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |   
	I0414 14:01:42.414395 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | </network>
	I0414 14:01:42.414404 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | 
	I0414 14:01:42.419672 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | trying to create private KVM network mk-old-k8s-version-954411 192.168.39.0/24...
	I0414 14:01:42.495567 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | private KVM network mk-old-k8s-version-954411 192.168.39.0/24 created
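The network XML dumped above is what gets handed to libvirt for the machine's private network. As a rough illustration (the kvm2 driver itself goes through the libvirt API rather than the CLI), an equivalent definition can be registered and started from Go by shelling out to virsh; the XML file path below is hypothetical, while the network name matches the log:

    // Define and start a libvirt network from an XML file via virsh.
    package main

    import (
        "log"
        "os/exec"
    )

    // run invokes virsh against the system libvirt daemon and aborts on failure.
    func run(args ...string) {
        cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("virsh %v: %v\n%s", args, err, out)
        }
    }

    func main() {
        // The XML path is hypothetical; the network name is the one from the log.
        run("net-define", "/tmp/mk-old-k8s-version-954411.xml")
        run("net-start", "mk-old-k8s-version-954411")
    }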
	I0414 14:01:42.495599 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:42.495526 2231858 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:01:42.495614 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting up store path in /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411 ...
	I0414 14:01:42.495631 2231425 main.go:141] libmachine: (old-k8s-version-954411) building disk image from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:01:42.495727 2231425 main.go:141] libmachine: (old-k8s-version-954411) Downloading /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:01:42.779984 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:42.779839 2231858 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa...
	I0414 14:01:42.941486 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:42.941322 2231858 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/old-k8s-version-954411.rawdisk...
	I0414 14:01:42.941548 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | Writing magic tar header
	I0414 14:01:42.941570 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | Writing SSH key tar header
	I0414 14:01:42.941589 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:42.941479 2231858 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411 ...
	I0414 14:01:42.941603 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411
	I0414 14:01:42.941624 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411 (perms=drwx------)
	I0414 14:01:42.941642 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines
	I0414 14:01:42.941652 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:01:42.941790 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:01:42.941856 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube (perms=drwxr-xr-x)
	I0414 14:01:42.941870 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077
	I0414 14:01:42.941895 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:01:42.941910 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home/jenkins
	I0414 14:01:42.941929 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home
	I0414 14:01:42.941942 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | skipping /home - not owner
	I0414 14:01:42.941978 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077 (perms=drwxrwxr-x)
	I0414 14:01:42.942005 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:01:42.942019 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:01:42.942030 2231425 main.go:141] libmachine: (old-k8s-version-954411) creating domain...
	I0414 14:01:42.943234 2231425 main.go:141] libmachine: (old-k8s-version-954411) define libvirt domain using xml: 
	I0414 14:01:42.943260 2231425 main.go:141] libmachine: (old-k8s-version-954411) <domain type='kvm'>
	I0414 14:01:42.943295 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <name>old-k8s-version-954411</name>
	I0414 14:01:42.943319 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <memory unit='MiB'>2200</memory>
	I0414 14:01:42.943331 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <vcpu>2</vcpu>
	I0414 14:01:42.943342 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <features>
	I0414 14:01:42.943353 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <acpi/>
	I0414 14:01:42.943364 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <apic/>
	I0414 14:01:42.943378 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <pae/>
	I0414 14:01:42.943393 2231425 main.go:141] libmachine: (old-k8s-version-954411)     
	I0414 14:01:42.943402 2231425 main.go:141] libmachine: (old-k8s-version-954411)   </features>
	I0414 14:01:42.943413 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <cpu mode='host-passthrough'>
	I0414 14:01:42.943425 2231425 main.go:141] libmachine: (old-k8s-version-954411)   
	I0414 14:01:42.943433 2231425 main.go:141] libmachine: (old-k8s-version-954411)   </cpu>
	I0414 14:01:42.943442 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <os>
	I0414 14:01:42.943453 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <type>hvm</type>
	I0414 14:01:42.943476 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <boot dev='cdrom'/>
	I0414 14:01:42.943496 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <boot dev='hd'/>
	I0414 14:01:42.943525 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <bootmenu enable='no'/>
	I0414 14:01:42.943535 2231425 main.go:141] libmachine: (old-k8s-version-954411)   </os>
	I0414 14:01:42.943544 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <devices>
	I0414 14:01:42.943556 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <disk type='file' device='cdrom'>
	I0414 14:01:42.943587 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/boot2docker.iso'/>
	I0414 14:01:42.943601 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <target dev='hdc' bus='scsi'/>
	I0414 14:01:42.943607 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <readonly/>
	I0414 14:01:42.943615 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </disk>
	I0414 14:01:42.943624 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <disk type='file' device='disk'>
	I0414 14:01:42.943644 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:01:42.943664 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/old-k8s-version-954411.rawdisk'/>
	I0414 14:01:42.943677 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <target dev='hda' bus='virtio'/>
	I0414 14:01:42.943688 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </disk>
	I0414 14:01:42.943699 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <interface type='network'>
	I0414 14:01:42.943710 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <source network='mk-old-k8s-version-954411'/>
	I0414 14:01:42.943722 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <model type='virtio'/>
	I0414 14:01:42.943735 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </interface>
	I0414 14:01:42.943747 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <interface type='network'>
	I0414 14:01:42.943757 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <source network='default'/>
	I0414 14:01:42.943765 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <model type='virtio'/>
	I0414 14:01:42.943775 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </interface>
	I0414 14:01:42.943784 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <serial type='pty'>
	I0414 14:01:42.943794 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <target port='0'/>
	I0414 14:01:42.943802 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </serial>
	I0414 14:01:42.943812 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <console type='pty'>
	I0414 14:01:42.943821 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <target type='serial' port='0'/>
	I0414 14:01:42.943835 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </console>
	I0414 14:01:42.943847 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <rng model='virtio'>
	I0414 14:01:42.943858 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <backend model='random'>/dev/random</backend>
	I0414 14:01:42.943868 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </rng>
	I0414 14:01:42.943877 2231425 main.go:141] libmachine: (old-k8s-version-954411)     
	I0414 14:01:42.943885 2231425 main.go:141] libmachine: (old-k8s-version-954411)     
	I0414 14:01:42.943894 2231425 main.go:141] libmachine: (old-k8s-version-954411)   </devices>
	I0414 14:01:42.943902 2231425 main.go:141] libmachine: (old-k8s-version-954411) </domain>
	I0414 14:01:42.943915 2231425 main.go:141] libmachine: (old-k8s-version-954411) 
	I0414 14:01:42.947328 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:c0:7b:40 in network default
	I0414 14:01:42.948005 2231425 main.go:141] libmachine: (old-k8s-version-954411) starting domain...
	I0414 14:01:42.948024 2231425 main.go:141] libmachine: (old-k8s-version-954411) ensuring networks are active...
	I0414 14:01:42.948036 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:42.948755 2231425 main.go:141] libmachine: (old-k8s-version-954411) Ensuring network default is active
	I0414 14:01:42.949156 2231425 main.go:141] libmachine: (old-k8s-version-954411) Ensuring network mk-old-k8s-version-954411 is active
	I0414 14:01:42.949711 2231425 main.go:141] libmachine: (old-k8s-version-954411) getting domain XML...
	I0414 14:01:42.950550 2231425 main.go:141] libmachine: (old-k8s-version-954411) creating domain...
	I0414 14:01:44.322603 2231425 main.go:141] libmachine: (old-k8s-version-954411) waiting for IP...
	I0414 14:01:44.323750 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:44.324363 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:44.324410 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:44.324348 2231858 retry.go:31] will retry after 279.076334ms: waiting for domain to come up
	I0414 14:01:44.605212 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:44.605923 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:44.605954 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:44.605856 2231858 retry.go:31] will retry after 254.872686ms: waiting for domain to come up
	I0414 14:01:44.862616 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:44.863190 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:44.863226 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:44.863176 2231858 retry.go:31] will retry after 298.853913ms: waiting for domain to come up
	I0414 14:01:45.164114 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:45.164912 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:45.164985 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:45.164894 2231858 retry.go:31] will retry after 536.754794ms: waiting for domain to come up
	I0414 14:01:45.703716 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:45.704247 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:45.704275 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:45.704222 2231858 retry.go:31] will retry after 518.01594ms: waiting for domain to come up
	I0414 14:01:46.224061 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:46.224567 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:46.224597 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:46.224521 2231858 retry.go:31] will retry after 811.819388ms: waiting for domain to come up
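The "waiting for IP" retries above poll libvirt until the freshly created domain obtains a DHCP lease on the private network. A standalone Go sketch of that loop, polling "virsh net-dhcp-leases" with a capped retry delay; the network name and MAC address are taken from the log, and the lease parsing is deliberately simplified:

    // Poll libvirt's DHCP leases until the new domain's MAC shows up.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        const network = "mk-old-k8s-version-954411"
        const mac = "52:54:00:e4:99:d7"
        delay := 250 * time.Millisecond
        for attempt := 1; attempt <= 30; attempt++ {
            out, err := exec.Command("virsh", "-c", "qemu:///system",
                "net-dhcp-leases", network).CombinedOutput()
            if err == nil && strings.Contains(string(out), mac) {
                fmt.Printf("lease found after %d attempt(s)\n", attempt)
                return
            }
            time.Sleep(delay)
            if delay < 2*time.Second {
                delay *= 2 // rough analogue of the growing retry.go delays above
            }
        }
        log.Fatal("timed out waiting for a DHCP lease")
    }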
	I0414 14:01:45.599708 2231182 pod_ready.go:93] pod "kube-controller-manager-pause-648153" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:45.599737 2231182 pod_ready.go:82] duration metric: took 401.195662ms for pod "kube-controller-manager-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:45.599751 2231182 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-25n6s" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:45.998940 2231182 pod_ready.go:93] pod "kube-proxy-25n6s" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:45.998972 2231182 pod_ready.go:82] duration metric: took 399.212322ms for pod "kube-proxy-25n6s" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:45.998986 2231182 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:46.403125 2231182 pod_ready.go:93] pod "kube-scheduler-pause-648153" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:46.403157 2231182 pod_ready.go:82] duration metric: took 404.162334ms for pod "kube-scheduler-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:46.403190 2231182 pod_ready.go:39] duration metric: took 2.435009698s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
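Each pod_ready wait above reduces to checking the PodReady condition in the pod's status. A compact client-go sketch of that check; the kubeconfig path uses the client-go default location and the pod name is taken from the log, neither of which is part of the test itself:

    // Report whether a pod's Ready condition is True, as the pod_ready waits do.
    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady returns true when the named pod's PodReady condition is True.
    func podReady(cs kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ready, err := podReady(cs, "kube-system", "coredns-668d6bf9bc-547jp")
        fmt.Println(ready, err)
    }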
	I0414 14:01:46.403219 2231182 api_server.go:52] waiting for apiserver process to appear ...
	I0414 14:01:46.403293 2231182 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:01:46.423832 2231182 api_server.go:72] duration metric: took 2.684132791s to wait for apiserver process to appear ...
	I0414 14:01:46.423906 2231182 api_server.go:88] waiting for apiserver healthz status ...
	I0414 14:01:46.423934 2231182 api_server.go:253] Checking apiserver healthz at https://192.168.61.188:8443/healthz ...
	I0414 14:01:46.429977 2231182 api_server.go:279] https://192.168.61.188:8443/healthz returned 200:
	ok
	I0414 14:01:46.431273 2231182 api_server.go:141] control plane version: v1.32.2
	I0414 14:01:46.431301 2231182 api_server.go:131] duration metric: took 7.385091ms to wait for apiserver health ...
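The healthz wait above is a plain HTTPS GET against https://192.168.61.188:8443/healthz that expects the body "ok". A minimal Go sketch of the same probe; TLS verification is skipped purely to keep the example short (minikube itself trusts the cluster CA instead):

    // Probe the apiserver /healthz endpoint, as the api_server wait does.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.61.188:8443/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }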
	I0414 14:01:46.431316 2231182 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 14:01:46.599462 2231182 system_pods.go:59] 6 kube-system pods found
	I0414 14:01:46.599496 2231182 system_pods.go:61] "coredns-668d6bf9bc-547jp" [1e9d901a-c53e-4a1d-9e5b-cb668fc9c105] Running
	I0414 14:01:46.599501 2231182 system_pods.go:61] "etcd-pause-648153" [4234866f-0e92-46ef-942b-9f0f226eda75] Running
	I0414 14:01:46.599505 2231182 system_pods.go:61] "kube-apiserver-pause-648153" [4e676b12-2146-4f5c-a2ac-bc90525b5ee1] Running
	I0414 14:01:46.599508 2231182 system_pods.go:61] "kube-controller-manager-pause-648153" [0af79766-a5a2-4ea7-b82e-1258520095ba] Running
	I0414 14:01:46.599511 2231182 system_pods.go:61] "kube-proxy-25n6s" [8400d3e1-b5ba-49a2-b916-fe8d6188fd6a] Running
	I0414 14:01:46.599515 2231182 system_pods.go:61] "kube-scheduler-pause-648153" [609512ad-5b0f-4810-ab03-4655c7bac009] Running
	I0414 14:01:46.599523 2231182 system_pods.go:74] duration metric: took 168.198382ms to wait for pod list to return data ...
	I0414 14:01:46.599530 2231182 default_sa.go:34] waiting for default service account to be created ...
	I0414 14:01:46.798786 2231182 default_sa.go:45] found service account: "default"
	I0414 14:01:46.798833 2231182 default_sa.go:55] duration metric: took 199.294389ms for default service account to be created ...
	I0414 14:01:46.798849 2231182 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 14:01:46.999534 2231182 system_pods.go:86] 6 kube-system pods found
	I0414 14:01:46.999574 2231182 system_pods.go:89] "coredns-668d6bf9bc-547jp" [1e9d901a-c53e-4a1d-9e5b-cb668fc9c105] Running
	I0414 14:01:46.999582 2231182 system_pods.go:89] "etcd-pause-648153" [4234866f-0e92-46ef-942b-9f0f226eda75] Running
	I0414 14:01:46.999588 2231182 system_pods.go:89] "kube-apiserver-pause-648153" [4e676b12-2146-4f5c-a2ac-bc90525b5ee1] Running
	I0414 14:01:46.999594 2231182 system_pods.go:89] "kube-controller-manager-pause-648153" [0af79766-a5a2-4ea7-b82e-1258520095ba] Running
	I0414 14:01:46.999598 2231182 system_pods.go:89] "kube-proxy-25n6s" [8400d3e1-b5ba-49a2-b916-fe8d6188fd6a] Running
	I0414 14:01:46.999603 2231182 system_pods.go:89] "kube-scheduler-pause-648153" [609512ad-5b0f-4810-ab03-4655c7bac009] Running
	I0414 14:01:46.999614 2231182 system_pods.go:126] duration metric: took 200.756417ms to wait for k8s-apps to be running ...
	I0414 14:01:46.999623 2231182 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 14:01:46.999681 2231182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:01:47.022600 2231182 system_svc.go:56] duration metric: took 22.952442ms WaitForService to wait for kubelet
	I0414 14:01:47.022642 2231182 kubeadm.go:582] duration metric: took 3.282951586s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:01:47.022669 2231182 node_conditions.go:102] verifying NodePressure condition ...
	I0414 14:01:47.199081 2231182 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 14:01:47.199126 2231182 node_conditions.go:123] node cpu capacity is 2
	I0414 14:01:47.199148 2231182 node_conditions.go:105] duration metric: took 176.469819ms to run NodePressure ...
	I0414 14:01:47.199164 2231182 start.go:241] waiting for startup goroutines ...
	I0414 14:01:47.199174 2231182 start.go:246] waiting for cluster config update ...
	I0414 14:01:47.199185 2231182 start.go:255] writing updated cluster config ...
	I0414 14:01:47.199518 2231182 ssh_runner.go:195] Run: rm -f paused
	I0414 14:01:47.264037 2231182 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 14:01:47.266794 2231182 out.go:177] * Done! kubectl is now configured to use "pause-648153" cluster and "default" namespace by default
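The closing line compares the kubectl client (1.32.3) against the cluster version (1.32.2) and reports the minor-version skew. A trivial sketch of that arithmetic, using the two versions from the log as fixed inputs:

    // Compute the minor-version skew reported in the final start.go line.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    func minor(v string) int {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        m, _ := strconv.Atoi(parts[1])
        return m
    }

    func main() {
        kubectl, cluster := "1.32.3", "1.32.2"
        skew := minor(kubectl) - minor(cluster)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
    }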
	
	
	==> CRI-O <==
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.043844036Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d31ffebaa536dce9882b2c65c9b013fa728c86b4309ac57646ce0c23af2488fd,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-547jp,Uid:1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1744639285846066535,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-547jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T14:00:15.328220640Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:379d8e077c5a25b8a9f0a02b9bbf82525aaaa05d4446490bbdde4c8e403e0268,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-648153,Uid:7ea445b470a6db809de2c0cc6a99f4b0,Namespace:kube-system,
Attempt:2,},State:SANDBOX_READY,CreatedAt:1744639285522445876,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea445b470a6db809de2c0cc6a99f4b0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7ea445b470a6db809de2c0cc6a99f4b0,kubernetes.io/config.seen: 2025-04-14T14:00:09.767612953Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:203dfc2d964c580384d7c6bdebdfb79d590661788fcf654bc2e95c9a1b379206,Metadata:&PodSandboxMetadata{Name:kube-proxy-25n6s,Uid:8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1744639285486442052,Labels:map[string]string{controller-revision-hash: 7bb84c4984,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-25n6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,k8s-app: k
ube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T14:00:15.212002358Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cc8ac8dcf17a58e5b4867aa66f21f596f65b2ca214c27741e23c138c1577296b,Metadata:&PodSandboxMetadata{Name:etcd-pause-648153,Uid:e22cb0ce8096984b632fd88aa5fc36ae,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1744639285458130484,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e22cb0ce8096984b632fd88aa5fc36ae,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.188:2379,kubernetes.io/config.hash: e22cb0ce8096984b632fd88aa5fc36ae,kubernetes.io/config.seen: 2025-04-14T14:00:09.767616711Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c761c6aad896b4be27b4f3d690eb10e1e63bc73b91270abfbcf27143f64d
a33f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-648153,Uid:977ef6515169eed38b9ed7443d502bdd,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1744639285392494441,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977ef6515169eed38b9ed7443d502bdd,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.188:8443,kubernetes.io/config.hash: 977ef6515169eed38b9ed7443d502bdd,kubernetes.io/config.seen: 2025-04-14T14:00:09.767618072Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8bff14b15bb8534ddfd602a5722a795cbf8323bcc111b8fb6af61fa9a24d1407,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-648153,Uid:5e02c1612d07445bf37eea8fbba07efd,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1744639285372966599,Labels:map[string]str
ing{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e02c1612d07445bf37eea8fbba07efd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5e02c1612d07445bf37eea8fbba07efd,kubernetes.io/config.seen: 2025-04-14T14:00:09.767619058Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0eab3308f6db74472336f9faf9d8612b5c8ffe5ce16747be4dc9a465765d91fb,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-648153,Uid:5e02c1612d07445bf37eea8fbba07efd,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1744639282321861222,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e02c1612d07445bf37eea8fbba07efd,tier: control-plane,},Annotations:map[stri
ng]string{kubernetes.io/config.hash: 5e02c1612d07445bf37eea8fbba07efd,kubernetes.io/config.seen: 2025-04-14T14:00:09.767619058Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2c4072a4422034e3886a773f60af0b24ef65b5e4f389971c5c50c5905282a7ae,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-648153,Uid:977ef6515169eed38b9ed7443d502bdd,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1744639282318236153,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977ef6515169eed38b9ed7443d502bdd,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.188:8443,kubernetes.io/config.hash: 977ef6515169eed38b9ed7443d502bdd,kubernetes.io/config.seen: 2025-04-14T14:00:09.767618072Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0623693a3eaad57f6e
6f792fceade9a23dce466e478091b19cc37f37ee7910d7,Metadata:&PodSandboxMetadata{Name:etcd-pause-648153,Uid:e22cb0ce8096984b632fd88aa5fc36ae,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1744639282289988176,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e22cb0ce8096984b632fd88aa5fc36ae,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.188:2379,kubernetes.io/config.hash: e22cb0ce8096984b632fd88aa5fc36ae,kubernetes.io/config.seen: 2025-04-14T14:00:09.767616711Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:88d265729b343e0aa15e1eadd7c0f497d433a750f0b827a3b6a2bf2bb9a1ec4a,Metadata:&PodSandboxMetadata{Name:kube-proxy-25n6s,Uid:8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1744639282284444292,Labels:map[string]string{c
ontroller-revision-hash: 7bb84c4984,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-25n6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T14:00:15.212002358Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5f429143a39281a4196184cb37045d7a4686d33e43f71737e9358747dc040950,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-648153,Uid:7ea445b470a6db809de2c0cc6a99f4b0,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1744639282266703067,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea445b470a6db809de2c0cc6a99f4b0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7ea445b470a6db809de2c0cc6a99f4b0,kubernetes
.io/config.seen: 2025-04-14T14:00:09.767612953Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4b43a30216a1d94b23079cbc5a2b6a04ab737e869ccd6962049f46c184118cd3,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-547jp,Uid:1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1744639215691296278,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-547jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T14:00:15.328220640Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d8fe9493-8636-4b87-b130-5ece3f593f2a name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.044622871Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=699c0bbb-3d79-48a2-a8f1-ecb152a2053d name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.044678669Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=699c0bbb-3d79-48a2-a8f1-ecb152a2053d name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.044989074Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8754ae5c4800f6831836ae31a48b6cf1813b9c346d066760990bb16525a55834,PodSandboxId:203dfc2d964c580384d7c6bdebdfb79d590661788fcf654bc2e95c9a1b379206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744639292047558151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25n6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9911ef4ec8193c8108f56966aaf9cf59202bdc9aae18b1592cc693ba2e429a86,PodSandboxId:d31ffebaa536dce9882b2c65c9b013fa728c86b4309ac57646ce0c23af2488fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744639292031376039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-547jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b27895b4bdecd768e0c4c8d5cba45bf39ccd5f1c11f15276a18a968abdb256b,PodSandboxId:c761c6aad896b4be27b4f3d690eb10e1e63bc73b91270abfbcf27143f64da33f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744639288432601722,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977ef651516
9eed38b9ed7443d502bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba7dd29999fa053a0a8f5c89462c2cb0656d52f4b2170b4b1d0daa3957e9df4,PodSandboxId:8bff14b15bb8534ddfd602a5722a795cbf8323bcc111b8fb6af61fa9a24d1407,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744639288402930498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e
02c1612d07445bf37eea8fbba07efd,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980fca4f660be3b3111c11dd5777d63f82e32a8c0fb3a14362e65ab341324c10,PodSandboxId:cc8ac8dcf17a58e5b4867aa66f21f596f65b2ca214c27741e23c138c1577296b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744639288419456703,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e22cb0ce8096984b632fd88aa5fc36ae,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a72cc76bdc27faa8def51dc546906bd4a19ef9c95c40cdcda380afbe1200d4,PodSandboxId:379d8e077c5a25b8a9f0a02b9bbf82525aaaa05d4446490bbdde4c8e403e0268,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744639288444444642,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea445b470a6db809de2c0cc6a99f4b0,},Annotations:map[string]string{io.
kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53812db1656a8544a0abe67ec5d3d9e5ff5f9ac7329e534d6639bad53d98eff0,PodSandboxId:88d265729b343e0aa15e1eadd7c0f497d433a750f0b827a3b6a2bf2bb9a1ec4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1744639282963032600,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25n6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f3a7c9af9bdfa1c23e6c10652eaecd817b2698b70f563618cb8633540811d,PodSandboxId:0eab3308f6db74472336f9faf9d8612b5c8ffe5ce16747be4dc9a465765d91fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744639282985516952,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e02c1612d07445bf37eea8fbba07efd,},Annotations:map[string]string{io.kubernetes.container.hash:
51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce231159b9655f6c50bff76f24391c112dda60887b48247d4818ede864b7678,PodSandboxId:2c4072a4422034e3886a773f60af0b24ef65b5e4f389971c5c50c5905282a7ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744639282861091804,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977ef6515169eed38b9ed7443d502bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1325a769984c7c2b1abd652d4fff1402a1f2fbae781977b7497a086f67193d89,PodSandboxId:0623693a3eaad57f6e6f792fceade9a23dce466e478091b19cc37f37ee7910d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744639282899457723,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e22cb0ce8096984b632fd88aa5fc36ae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a768cc6261e969cdf48541d3c9173bf7169640aef0ffc6dacfdadf3990e58d,PodSandboxId:5f429143a39281a4196184cb37045d7a4686d33e43f71737e9358747dc040950,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744639282659625848,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea445b470a6db809de2c0cc6a99f4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7e70afba867cc238cf48ae24ccb9872aac09f0a86076a8e8dd9b23be3e32e3,PodSandboxId:4b43a30216a1d94b23079cbc5a2b6a04ab737e869ccd6962049f46c184118cd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744639216495676613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-547jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=699c0bbb-3d79-48a2-a8f1-ecb152a2053d name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.055451130Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4026164f-72c4-4d1f-98ed-cee1999af7a4 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.055521599Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4026164f-72c4-4d1f-98ed-cee1999af7a4 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.056395230Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=100c7eed-dd0e-4a09-b4d3-bd85a324f547 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.057052853Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639308057030079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=100c7eed-dd0e-4a09-b4d3-bd85a324f547 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.057869529Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a819fa04-9591-42e9-be52-3ad45122aaca name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.057920218Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a819fa04-9591-42e9-be52-3ad45122aaca name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.058303310Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8754ae5c4800f6831836ae31a48b6cf1813b9c346d066760990bb16525a55834,PodSandboxId:203dfc2d964c580384d7c6bdebdfb79d590661788fcf654bc2e95c9a1b379206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744639292047558151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25n6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9911ef4ec8193c8108f56966aaf9cf59202bdc9aae18b1592cc693ba2e429a86,PodSandboxId:d31ffebaa536dce9882b2c65c9b013fa728c86b4309ac57646ce0c23af2488fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744639292031376039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-547jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b27895b4bdecd768e0c4c8d5cba45bf39ccd5f1c11f15276a18a968abdb256b,PodSandboxId:c761c6aad896b4be27b4f3d690eb10e1e63bc73b91270abfbcf27143f64da33f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744639288432601722,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977ef651516
9eed38b9ed7443d502bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba7dd29999fa053a0a8f5c89462c2cb0656d52f4b2170b4b1d0daa3957e9df4,PodSandboxId:8bff14b15bb8534ddfd602a5722a795cbf8323bcc111b8fb6af61fa9a24d1407,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744639288402930498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e
02c1612d07445bf37eea8fbba07efd,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980fca4f660be3b3111c11dd5777d63f82e32a8c0fb3a14362e65ab341324c10,PodSandboxId:cc8ac8dcf17a58e5b4867aa66f21f596f65b2ca214c27741e23c138c1577296b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744639288419456703,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e22cb0ce8096984b632fd88aa5fc36ae,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a72cc76bdc27faa8def51dc546906bd4a19ef9c95c40cdcda380afbe1200d4,PodSandboxId:379d8e077c5a25b8a9f0a02b9bbf82525aaaa05d4446490bbdde4c8e403e0268,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744639288444444642,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea445b470a6db809de2c0cc6a99f4b0,},Annotations:map[string]string{io.
kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53812db1656a8544a0abe67ec5d3d9e5ff5f9ac7329e534d6639bad53d98eff0,PodSandboxId:88d265729b343e0aa15e1eadd7c0f497d433a750f0b827a3b6a2bf2bb9a1ec4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1744639282963032600,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25n6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f3a7c9af9bdfa1c23e6c10652eaecd817b2698b70f563618cb8633540811d,PodSandboxId:0eab3308f6db74472336f9faf9d8612b5c8ffe5ce16747be4dc9a465765d91fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744639282985516952,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e02c1612d07445bf37eea8fbba07efd,},Annotations:map[string]string{io.kubernetes.container.hash:
51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce231159b9655f6c50bff76f24391c112dda60887b48247d4818ede864b7678,PodSandboxId:2c4072a4422034e3886a773f60af0b24ef65b5e4f389971c5c50c5905282a7ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744639282861091804,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977ef6515169eed38b9ed7443d502bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1325a769984c7c2b1abd652d4fff1402a1f2fbae781977b7497a086f67193d89,PodSandboxId:0623693a3eaad57f6e6f792fceade9a23dce466e478091b19cc37f37ee7910d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744639282899457723,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e22cb0ce8096984b632fd88aa5fc36ae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a768cc6261e969cdf48541d3c9173bf7169640aef0ffc6dacfdadf3990e58d,PodSandboxId:5f429143a39281a4196184cb37045d7a4686d33e43f71737e9358747dc040950,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744639282659625848,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea445b470a6db809de2c0cc6a99f4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7e70afba867cc238cf48ae24ccb9872aac09f0a86076a8e8dd9b23be3e32e3,PodSandboxId:4b43a30216a1d94b23079cbc5a2b6a04ab737e869ccd6962049f46c184118cd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744639216495676613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-547jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a819fa04-9591-42e9-be52-3ad45122aaca name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.112138785Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=401f52da-b0fc-477c-8c8f-5323bab998f4 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.112235488Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=401f52da-b0fc-477c-8c8f-5323bab998f4 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.114424231Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2cafa23c-9001-440e-8bd6-fda07fadfa9f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.115179453Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639308115143239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2cafa23c-9001-440e-8bd6-fda07fadfa9f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.115954936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=462ec3ff-63ad-4985-9d7e-d3b8f32b9aac name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.116061385Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=462ec3ff-63ad-4985-9d7e-d3b8f32b9aac name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.116389229Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8754ae5c4800f6831836ae31a48b6cf1813b9c346d066760990bb16525a55834,PodSandboxId:203dfc2d964c580384d7c6bdebdfb79d590661788fcf654bc2e95c9a1b379206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744639292047558151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25n6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9911ef4ec8193c8108f56966aaf9cf59202bdc9aae18b1592cc693ba2e429a86,PodSandboxId:d31ffebaa536dce9882b2c65c9b013fa728c86b4309ac57646ce0c23af2488fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744639292031376039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-547jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b27895b4bdecd768e0c4c8d5cba45bf39ccd5f1c11f15276a18a968abdb256b,PodSandboxId:c761c6aad896b4be27b4f3d690eb10e1e63bc73b91270abfbcf27143f64da33f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744639288432601722,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977ef651516
9eed38b9ed7443d502bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba7dd29999fa053a0a8f5c89462c2cb0656d52f4b2170b4b1d0daa3957e9df4,PodSandboxId:8bff14b15bb8534ddfd602a5722a795cbf8323bcc111b8fb6af61fa9a24d1407,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744639288402930498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e
02c1612d07445bf37eea8fbba07efd,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980fca4f660be3b3111c11dd5777d63f82e32a8c0fb3a14362e65ab341324c10,PodSandboxId:cc8ac8dcf17a58e5b4867aa66f21f596f65b2ca214c27741e23c138c1577296b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744639288419456703,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e22cb0ce8096984b632fd88aa5fc36ae,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a72cc76bdc27faa8def51dc546906bd4a19ef9c95c40cdcda380afbe1200d4,PodSandboxId:379d8e077c5a25b8a9f0a02b9bbf82525aaaa05d4446490bbdde4c8e403e0268,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744639288444444642,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea445b470a6db809de2c0cc6a99f4b0,},Annotations:map[string]string{io.
kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53812db1656a8544a0abe67ec5d3d9e5ff5f9ac7329e534d6639bad53d98eff0,PodSandboxId:88d265729b343e0aa15e1eadd7c0f497d433a750f0b827a3b6a2bf2bb9a1ec4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1744639282963032600,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25n6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f3a7c9af9bdfa1c23e6c10652eaecd817b2698b70f563618cb8633540811d,PodSandboxId:0eab3308f6db74472336f9faf9d8612b5c8ffe5ce16747be4dc9a465765d91fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744639282985516952,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e02c1612d07445bf37eea8fbba07efd,},Annotations:map[string]string{io.kubernetes.container.hash:
51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce231159b9655f6c50bff76f24391c112dda60887b48247d4818ede864b7678,PodSandboxId:2c4072a4422034e3886a773f60af0b24ef65b5e4f389971c5c50c5905282a7ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744639282861091804,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977ef6515169eed38b9ed7443d502bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1325a769984c7c2b1abd652d4fff1402a1f2fbae781977b7497a086f67193d89,PodSandboxId:0623693a3eaad57f6e6f792fceade9a23dce466e478091b19cc37f37ee7910d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744639282899457723,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e22cb0ce8096984b632fd88aa5fc36ae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a768cc6261e969cdf48541d3c9173bf7169640aef0ffc6dacfdadf3990e58d,PodSandboxId:5f429143a39281a4196184cb37045d7a4686d33e43f71737e9358747dc040950,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744639282659625848,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea445b470a6db809de2c0cc6a99f4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7e70afba867cc238cf48ae24ccb9872aac09f0a86076a8e8dd9b23be3e32e3,PodSandboxId:4b43a30216a1d94b23079cbc5a2b6a04ab737e869ccd6962049f46c184118cd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744639216495676613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-547jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=462ec3ff-63ad-4985-9d7e-d3b8f32b9aac name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.162422054Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49c39874-4421-4add-a280-ec82286991a9 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.162493850Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49c39874-4421-4add-a280-ec82286991a9 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.164006654Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d46c361-bc34-4d57-a447-5ee4235ba3ed name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.164365471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639308164341514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d46c361-bc34-4d57-a447-5ee4235ba3ed name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.164914482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5208914-81e4-4917-9de4-46b3c71a8965 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.164963428Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5208914-81e4-4917-9de4-46b3c71a8965 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:48 pause-648153 crio[2982]: time="2025-04-14 14:01:48.165267356Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8754ae5c4800f6831836ae31a48b6cf1813b9c346d066760990bb16525a55834,PodSandboxId:203dfc2d964c580384d7c6bdebdfb79d590661788fcf654bc2e95c9a1b379206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744639292047558151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25n6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9911ef4ec8193c8108f56966aaf9cf59202bdc9aae18b1592cc693ba2e429a86,PodSandboxId:d31ffebaa536dce9882b2c65c9b013fa728c86b4309ac57646ce0c23af2488fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744639292031376039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-547jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b27895b4bdecd768e0c4c8d5cba45bf39ccd5f1c11f15276a18a968abdb256b,PodSandboxId:c761c6aad896b4be27b4f3d690eb10e1e63bc73b91270abfbcf27143f64da33f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744639288432601722,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977ef651516
9eed38b9ed7443d502bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba7dd29999fa053a0a8f5c89462c2cb0656d52f4b2170b4b1d0daa3957e9df4,PodSandboxId:8bff14b15bb8534ddfd602a5722a795cbf8323bcc111b8fb6af61fa9a24d1407,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744639288402930498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e
02c1612d07445bf37eea8fbba07efd,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980fca4f660be3b3111c11dd5777d63f82e32a8c0fb3a14362e65ab341324c10,PodSandboxId:cc8ac8dcf17a58e5b4867aa66f21f596f65b2ca214c27741e23c138c1577296b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744639288419456703,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e22cb0ce8096984b632fd88aa5fc36ae,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a72cc76bdc27faa8def51dc546906bd4a19ef9c95c40cdcda380afbe1200d4,PodSandboxId:379d8e077c5a25b8a9f0a02b9bbf82525aaaa05d4446490bbdde4c8e403e0268,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744639288444444642,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea445b470a6db809de2c0cc6a99f4b0,},Annotations:map[string]string{io.
kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53812db1656a8544a0abe67ec5d3d9e5ff5f9ac7329e534d6639bad53d98eff0,PodSandboxId:88d265729b343e0aa15e1eadd7c0f497d433a750f0b827a3b6a2bf2bb9a1ec4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1744639282963032600,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25n6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f3a7c9af9bdfa1c23e6c10652eaecd817b2698b70f563618cb8633540811d,PodSandboxId:0eab3308f6db74472336f9faf9d8612b5c8ffe5ce16747be4dc9a465765d91fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744639282985516952,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e02c1612d07445bf37eea8fbba07efd,},Annotations:map[string]string{io.kubernetes.container.hash:
51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce231159b9655f6c50bff76f24391c112dda60887b48247d4818ede864b7678,PodSandboxId:2c4072a4422034e3886a773f60af0b24ef65b5e4f389971c5c50c5905282a7ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744639282861091804,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977ef6515169eed38b9ed7443d502bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1325a769984c7c2b1abd652d4fff1402a1f2fbae781977b7497a086f67193d89,PodSandboxId:0623693a3eaad57f6e6f792fceade9a23dce466e478091b19cc37f37ee7910d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744639282899457723,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e22cb0ce8096984b632fd88aa5fc36ae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a768cc6261e969cdf48541d3c9173bf7169640aef0ffc6dacfdadf3990e58d,PodSandboxId:5f429143a39281a4196184cb37045d7a4686d33e43f71737e9358747dc040950,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744639282659625848,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea445b470a6db809de2c0cc6a99f4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7e70afba867cc238cf48ae24ccb9872aac09f0a86076a8e8dd9b23be3e32e3,PodSandboxId:4b43a30216a1d94b23079cbc5a2b6a04ab737e869ccd6962049f46c184118cd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744639216495676613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-547jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5208914-81e4-4917-9de4-46b3c71a8965 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8754ae5c4800f       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   16 seconds ago       Running             kube-proxy                2                   203dfc2d964c5       kube-proxy-25n6s
	9911ef4ec8193       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 seconds ago       Running             coredns                   1                   d31ffebaa536d       coredns-668d6bf9bc-547jp
	66a72cc76bdc2       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   19 seconds ago       Running             kube-scheduler            2                   379d8e077c5a2       kube-scheduler-pause-648153
	3b27895b4bdec       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   19 seconds ago       Running             kube-apiserver            2                   c761c6aad896b       kube-apiserver-pause-648153
	980fca4f660be       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago       Running             etcd                      2                   cc8ac8dcf17a5       etcd-pause-648153
	eba7dd29999fa       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   19 seconds ago       Running             kube-controller-manager   2                   8bff14b15bb85       kube-controller-manager-pause-648153
	a72f3a7c9af9b       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   25 seconds ago       Exited              kube-controller-manager   1                   0eab3308f6db7       kube-controller-manager-pause-648153
	53812db1656a8       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   25 seconds ago       Exited              kube-proxy                1                   88d265729b343       kube-proxy-25n6s
	1325a769984c7       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   25 seconds ago       Exited              etcd                      1                   0623693a3eaad       etcd-pause-648153
	cce231159b965       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   25 seconds ago       Exited              kube-apiserver            1                   2c4072a442203       kube-apiserver-pause-648153
	49a768cc6261e       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   25 seconds ago       Exited              kube-scheduler            1                   5f429143a3928       kube-scheduler-pause-648153
	2a7e70afba867       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   4b43a30216a1d       coredns-668d6bf9bc-547jp
	
	
	==> coredns [2a7e70afba867cc238cf48ae24ccb9872aac09f0a86076a8e8dd9b23be3e32e3] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1032249385]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:00:16.777) (total time: 30006ms):
	Trace[1032249385]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (14:00:46.782)
	Trace[1032249385]: [30.006164861s] [30.006164861s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[513769814]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:00:16.777) (total time: 30006ms):
	Trace[513769814]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (14:00:46.782)
	Trace[513769814]: [30.006466232s] [30.006466232s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1550010925]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:00:16.781) (total time: 30002ms):
	Trace[1550010925]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (14:00:46.783)
	Trace[1550010925]: [30.002250999s] [30.002250999s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	[INFO] Reloading complete
	[INFO] 127.0.0.1:41803 - 17721 "HINFO IN 8248068367183701146.3141727108954957871. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010731802s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9911ef4ec8193c8108f56966aaf9cf59202bdc9aae18b1592cc693ba2e429a86] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34440 - 18667 "HINFO IN 3602844357350121543.6231432446882578563. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011900804s
	
	
	==> describe nodes <==
	Name:               pause-648153
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-648153
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=460835bb8f21087bfa90e48a25f4afc66a903d88
	                    minikube.k8s.io/name=pause-648153
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T14_00_10_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:00:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-648153
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:01:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:01:31 +0000   Mon, 14 Apr 2025 14:00:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:01:31 +0000   Mon, 14 Apr 2025 14:00:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:01:31 +0000   Mon, 14 Apr 2025 14:00:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:01:31 +0000   Mon, 14 Apr 2025 14:00:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.188
	  Hostname:    pause-648153
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 b92df190d95c446491ec92767e777450
	  System UUID:                b92df190-d95c-4464-91ec-92767e777450
	  Boot ID:                    08a123f5-fd8d-497e-9169-5bb85fece951
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-547jp                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     93s
	  kube-system                 etcd-pause-648153                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         101s
	  kube-system                 kube-apiserver-pause-648153             250m (12%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-pause-648153    200m (10%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-25n6s                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-pause-648153             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 91s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeHasSufficientPID     99s                kubelet          Node pause-648153 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  99s                kubelet          Node pause-648153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                kubelet          Node pause-648153 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 99s                kubelet          Starting kubelet.
	  Normal  NodeReady                98s                kubelet          Node pause-648153 status is now: NodeReady
	  Normal  RegisteredNode           94s                node-controller  Node pause-648153 event: Registered Node pause-648153 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20s (x8 over 21s)  kubelet          Node pause-648153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 21s)  kubelet          Node pause-648153 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 21s)  kubelet          Node pause-648153 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14s                node-controller  Node pause-648153 event: Registered Node pause-648153 in Controller
	
	
	==> dmesg <==
	[  +9.895027] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.057983] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072030] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.189336] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.125071] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.281335] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.569322] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +0.064550] kauditd_printk_skb: 130 callbacks suppressed
	[Apr14 14:00] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +1.218817] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.846395] systemd-fstab-generator[1238]: Ignoring "noauto" option for root device
	[  +0.096582] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.885342] systemd-fstab-generator[1380]: Ignoring "noauto" option for root device
	[  +0.142392] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.757389] kauditd_printk_skb: 88 callbacks suppressed
	[Apr14 14:01] systemd-fstab-generator[2332]: Ignoring "noauto" option for root device
	[  +0.157935] systemd-fstab-generator[2344]: Ignoring "noauto" option for root device
	[  +0.268423] systemd-fstab-generator[2427]: Ignoring "noauto" option for root device
	[  +0.259603] systemd-fstab-generator[2539]: Ignoring "noauto" option for root device
	[  +0.867748] systemd-fstab-generator[2885]: Ignoring "noauto" option for root device
	[  +1.185821] systemd-fstab-generator[3152]: Ignoring "noauto" option for root device
	[  +2.706873] systemd-fstab-generator[3612]: Ignoring "noauto" option for root device
	[  +0.089522] kauditd_printk_skb: 238 callbacks suppressed
	[  +5.106943] kauditd_printk_skb: 48 callbacks suppressed
	[ +11.168354] systemd-fstab-generator[4078]: Ignoring "noauto" option for root device
	
	
	==> etcd [1325a769984c7c2b1abd652d4fff1402a1f2fbae781977b7497a086f67193d89] <==
	{"level":"info","ts":"2025-04-14T14:01:23.385548Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-04-14T14:01:23.453323Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"5178d02fe96ee090","local-member-id":"a1d4c90ecc3171ce","commit-index":430}
	{"level":"info","ts":"2025-04-14T14:01:23.453554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4c90ecc3171ce switched to configuration voters=()"}
	{"level":"info","ts":"2025-04-14T14:01:23.453714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4c90ecc3171ce became follower at term 2"}
	{"level":"info","ts":"2025-04-14T14:01:23.453811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft a1d4c90ecc3171ce [peers: [], term: 2, commit: 430, applied: 0, lastindex: 430, lastterm: 2]"}
	{"level":"warn","ts":"2025-04-14T14:01:23.468132Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-04-14T14:01:23.521294Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":402}
	{"level":"info","ts":"2025-04-14T14:01:23.531062Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-04-14T14:01:23.538627Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"a1d4c90ecc3171ce","timeout":"7s"}
	{"level":"info","ts":"2025-04-14T14:01:23.539344Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"a1d4c90ecc3171ce"}
	{"level":"info","ts":"2025-04-14T14:01:23.539525Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"a1d4c90ecc3171ce","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-14T14:01:23.540710Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:01:23.541113Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-04-14T14:01:23.541320Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-14T14:01:23.541456Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-14T14:01:23.541468Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-14T14:01:23.541875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4c90ecc3171ce switched to configuration voters=(11661166400561574350)"}
	{"level":"info","ts":"2025-04-14T14:01:23.541970Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5178d02fe96ee090","local-member-id":"a1d4c90ecc3171ce","added-peer-id":"a1d4c90ecc3171ce","added-peer-peer-urls":["https://192.168.61.188:2380"]}
	{"level":"info","ts":"2025-04-14T14:01:23.542064Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5178d02fe96ee090","local-member-id":"a1d4c90ecc3171ce","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:01:23.542100Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:01:23.544424Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T14:01:23.556174Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.61.188:2380"}
	{"level":"info","ts":"2025-04-14T14:01:23.556209Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.61.188:2380"}
	{"level":"info","ts":"2025-04-14T14:01:23.565469Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"a1d4c90ecc3171ce","initial-advertise-peer-urls":["https://192.168.61.188:2380"],"listen-peer-urls":["https://192.168.61.188:2380"],"advertise-client-urls":["https://192.168.61.188:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.188:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T14:01:23.565529Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [980fca4f660be3b3111c11dd5777d63f82e32a8c0fb3a14362e65ab341324c10] <==
	{"level":"info","ts":"2025-04-14T14:01:28.897443Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5178d02fe96ee090","local-member-id":"a1d4c90ecc3171ce","added-peer-id":"a1d4c90ecc3171ce","added-peer-peer-urls":["https://192.168.61.188:2380"]}
	{"level":"info","ts":"2025-04-14T14:01:28.897537Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5178d02fe96ee090","local-member-id":"a1d4c90ecc3171ce","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:01:28.897577Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:01:28.897980Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:01:28.902403Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T14:01:28.904076Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"a1d4c90ecc3171ce","initial-advertise-peer-urls":["https://192.168.61.188:2380"],"listen-peer-urls":["https://192.168.61.188:2380"],"advertise-client-urls":["https://192.168.61.188:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.188:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T14:01:28.904137Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-14T14:01:28.904218Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.61.188:2380"}
	{"level":"info","ts":"2025-04-14T14:01:28.904240Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.61.188:2380"}
	{"level":"info","ts":"2025-04-14T14:01:30.161123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4c90ecc3171ce is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-14T14:01:30.161229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4c90ecc3171ce became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-14T14:01:30.161280Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4c90ecc3171ce received MsgPreVoteResp from a1d4c90ecc3171ce at term 2"}
	{"level":"info","ts":"2025-04-14T14:01:30.161309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4c90ecc3171ce became candidate at term 3"}
	{"level":"info","ts":"2025-04-14T14:01:30.161327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4c90ecc3171ce received MsgVoteResp from a1d4c90ecc3171ce at term 3"}
	{"level":"info","ts":"2025-04-14T14:01:30.161347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4c90ecc3171ce became leader at term 3"}
	{"level":"info","ts":"2025-04-14T14:01:30.161366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a1d4c90ecc3171ce elected leader a1d4c90ecc3171ce at term 3"}
	{"level":"info","ts":"2025-04-14T14:01:30.165963Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:01:30.166456Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:01:30.165967Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"a1d4c90ecc3171ce","local-member-attributes":"{Name:pause-648153 ClientURLs:[https://192.168.61.188:2379]}","request-path":"/0/members/a1d4c90ecc3171ce/attributes","cluster-id":"5178d02fe96ee090","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T14:01:30.166969Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:01:30.167031Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T14:01:30.167418Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:01:30.167596Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:01:30.168617Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.188:2379"}
	{"level":"info","ts":"2025-04-14T14:01:30.168669Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:01:48 up 2 min,  0 users,  load average: 1.75, 0.61, 0.22
	Linux pause-648153 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3b27895b4bdecd768e0c4c8d5cba45bf39ccd5f1c11f15276a18a968abdb256b] <==
	I0414 14:01:31.666983       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0414 14:01:31.679099       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0414 14:01:31.679192       1 policy_source.go:240] refreshing policies
	I0414 14:01:31.679256       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0414 14:01:31.679280       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0414 14:01:31.679315       1 aggregator.go:171] initial CRD sync complete...
	I0414 14:01:31.679339       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:01:31.679358       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:01:31.679373       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:01:31.681955       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0414 14:01:31.682447       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0414 14:01:31.692987       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0414 14:01:31.697622       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0414 14:01:31.704033       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0414 14:01:31.737912       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:01:31.740499       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:01:31.751282       1 shared_informer.go:320] Caches are synced for configmaps
	E0414 14:01:31.793577       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0414 14:01:32.461665       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:01:33.064861       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:01:33.111040       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:01:33.150260       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 14:01:33.157331       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 14:01:34.866403       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 14:01:35.165487       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [cce231159b9655f6c50bff76f24391c112dda60887b48247d4818ede864b7678] <==
	W0414 14:01:23.467591       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0414 14:01:23.473930       1 options.go:238] external host was not specified, using 192.168.61.188
	I0414 14:01:23.481303       1 server.go:143] Version: v1.32.2
	I0414 14:01:23.482128       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [a72f3a7c9af9bdfa1c23e6c10652eaecd817b2698b70f563618cb8633540811d] <==
	
	
	==> kube-controller-manager [eba7dd29999fa053a0a8f5c89462c2cb0656d52f4b2170b4b1d0daa3957e9df4] <==
	I0414 14:01:34.876259       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0414 14:01:34.876383       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-648153"
	I0414 14:01:34.880151       1 shared_informer.go:320] Caches are synced for deployment
	I0414 14:01:34.882597       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0414 14:01:34.888038       1 shared_informer.go:320] Caches are synced for persistent volume
	I0414 14:01:34.891517       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 14:01:34.899860       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0414 14:01:34.907942       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0414 14:01:34.908004       1 shared_informer.go:320] Caches are synced for disruption
	I0414 14:01:34.908140       1 shared_informer.go:320] Caches are synced for HPA
	I0414 14:01:34.909334       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0414 14:01:34.910665       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0414 14:01:34.910836       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0414 14:01:34.910868       1 shared_informer.go:320] Caches are synced for taint
	I0414 14:01:34.910935       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0414 14:01:34.910981       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0414 14:01:34.911074       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-648153"
	I0414 14:01:34.911162       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0414 14:01:34.911270       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0414 14:01:34.912957       1 shared_informer.go:320] Caches are synced for ephemeral
	I0414 14:01:34.916164       1 shared_informer.go:320] Caches are synced for TTL
	I0414 14:01:34.919574       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0414 14:01:34.919611       1 shared_informer.go:320] Caches are synced for job
	I0414 14:01:34.922208       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0414 14:01:34.925691       1 shared_informer.go:320] Caches are synced for attach detach
	
	
	==> kube-proxy [53812db1656a8544a0abe67ec5d3d9e5ff5f9ac7329e534d6639bad53d98eff0] <==
	
	
	==> kube-proxy [8754ae5c4800f6831836ae31a48b6cf1813b9c346d066760990bb16525a55834] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:01:32.386618       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:01:32.397977       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.188"]
	E0414 14:01:32.398092       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:01:32.437305       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:01:32.437353       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:01:32.437377       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:01:32.440899       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:01:32.441365       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:01:32.441723       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:01:32.444350       1 config.go:199] "Starting service config controller"
	I0414 14:01:32.444408       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:01:32.444436       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:01:32.444440       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:01:32.448022       1 config.go:329] "Starting node config controller"
	I0414 14:01:32.448097       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:01:32.544801       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 14:01:32.544806       1 shared_informer.go:320] Caches are synced for service config
	I0414 14:01:32.548614       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [49a768cc6261e969cdf48541d3c9173bf7169640aef0ffc6dacfdadf3990e58d] <==
	
	
	==> kube-scheduler [66a72cc76bdc27faa8def51dc546906bd4a19ef9c95c40cdcda380afbe1200d4] <==
	I0414 14:01:29.662323       1 serving.go:386] Generated self-signed cert in-memory
	W0414 14:01:31.644020       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 14:01:31.644109       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 14:01:31.644133       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 14:01:31.644152       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 14:01:31.668050       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0414 14:01:31.668188       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:01:31.670532       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0414 14:01:31.670637       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 14:01:31.671303       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 14:01:31.670650       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0414 14:01:31.772523       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 14:01:30 pause-648153 kubelet[3619]: E0414 14:01:30.960981    3619 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-648153\" not found" node="pause-648153"
	Apr 14 14:01:30 pause-648153 kubelet[3619]: E0414 14:01:30.962955    3619 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-648153\" not found" node="pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.623834    3619 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.699767    3619 apiserver.go:52] "Watching apiserver"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.716602    3619 kubelet_node_status.go:125] "Node was previously registered" node="pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.716790    3619 kubelet_node_status.go:79] "Successfully registered node" node="pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.716814    3619 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.718192    3619 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.726019    3619 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.726908    3619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8400d3e1-b5ba-49a2-b916-fe8d6188fd6a-xtables-lock\") pod \"kube-proxy-25n6s\" (UID: \"8400d3e1-b5ba-49a2-b916-fe8d6188fd6a\") " pod="kube-system/kube-proxy-25n6s"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.727007    3619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8400d3e1-b5ba-49a2-b916-fe8d6188fd6a-lib-modules\") pod \"kube-proxy-25n6s\" (UID: \"8400d3e1-b5ba-49a2-b916-fe8d6188fd6a\") " pod="kube-system/kube-proxy-25n6s"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: E0414 14:01:31.787881    3619 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-648153\" already exists" pod="kube-system/etcd-pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.788049    3619 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: E0414 14:01:31.802343    3619 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-648153\" already exists" pod="kube-system/kube-apiserver-pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.802464    3619 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: E0414 14:01:31.825988    3619 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-648153\" already exists" pod="kube-system/kube-controller-manager-pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.826197    3619 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: E0414 14:01:31.846486    3619 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-648153\" already exists" pod="kube-system/kube-scheduler-pause-648153"
	Apr 14 14:01:32 pause-648153 kubelet[3619]: I0414 14:01:32.008379    3619 scope.go:117] "RemoveContainer" containerID="2a7e70afba867cc238cf48ae24ccb9872aac09f0a86076a8e8dd9b23be3e32e3"
	Apr 14 14:01:32 pause-648153 kubelet[3619]: I0414 14:01:32.009571    3619 scope.go:117] "RemoveContainer" containerID="53812db1656a8544a0abe67ec5d3d9e5ff5f9ac7329e534d6639bad53d98eff0"
	Apr 14 14:01:34 pause-648153 kubelet[3619]: I0414 14:01:34.156510    3619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 14 14:01:37 pause-648153 kubelet[3619]: E0414 14:01:37.875719    3619 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639297875401737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 14:01:37 pause-648153 kubelet[3619]: E0414 14:01:37.875792    3619 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639297875401737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 14:01:47 pause-648153 kubelet[3619]: E0414 14:01:47.878518    3619 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639307878235157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 14:01:47 pause-648153 kubelet[3619]: E0414 14:01:47.878549    3619 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639307878235157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-648153 -n pause-648153
helpers_test.go:261: (dbg) Run:  kubectl --context pause-648153 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-648153 -n pause-648153
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-648153 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-648153 logs -n 25: (1.621761554s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-793608 sudo docker                         | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo cat                            | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo cat                            | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo cat                            | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo cat                            | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo                                | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo find                           | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-793608 sudo crio                           | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-793608                                     | cilium-793608             | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC | 14 Apr 25 14:00 UTC |
	| start   | -p force-systemd-flag-509258                         | force-systemd-flag-509258 | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC | 14 Apr 25 14:01 UTC |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-461086                         | kubernetes-upgrade-461086 | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC | 14 Apr 25 14:00 UTC |
	| start   | -p pause-648153                                      | pause-648153              | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC | 14 Apr 25 14:01 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-742924                            | running-upgrade-742924    | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC | 14 Apr 25 14:00 UTC |
	| start   | -p kubernetes-upgrade-461086                         | kubernetes-upgrade-461086 | jenkins | v1.35.0 | 14 Apr 25 14:00 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-954411                            | old-k8s-version-954411    | jenkins | v1.35.0 | 14 Apr 25 14:01 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-509258 ssh cat                    | force-systemd-flag-509258 | jenkins | v1.35.0 | 14 Apr 25 14:01 UTC | 14 Apr 25 14:01 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-509258                         | force-systemd-flag-509258 | jenkins | v1.35.0 | 14 Apr 25 14:01 UTC | 14 Apr 25 14:01 UTC |
	| start   | -p no-preload-496809                                 | no-preload-496809         | jenkins | v1.35.0 | 14 Apr 25 14:01 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:01:35
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:01:35.181337 2231816 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:01:35.181648 2231816 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:01:35.181662 2231816 out.go:358] Setting ErrFile to fd 2...
	I0414 14:01:35.181669 2231816 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:01:35.181958 2231816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 14:01:35.182802 2231816 out.go:352] Setting JSON to false
	I0414 14:01:35.184027 2231816 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":168234,"bootTime":1744471061,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:01:35.184096 2231816 start.go:139] virtualization: kvm guest
	I0414 14:01:35.185988 2231816 out.go:177] * [no-preload-496809] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:01:35.187468 2231816 out.go:177]   - MINIKUBE_LOCATION=20623
	I0414 14:01:35.187469 2231816 notify.go:220] Checking for updates...
	I0414 14:01:35.188914 2231816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:01:35.190144 2231816 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:01:35.191434 2231816 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:01:35.192736 2231816 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:01:35.193818 2231816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:01:35.195482 2231816 config.go:182] Loaded profile config "kubernetes-upgrade-461086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:01:35.195668 2231816 config.go:182] Loaded profile config "old-k8s-version-954411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 14:01:35.195853 2231816 config.go:182] Loaded profile config "pause-648153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:01:35.195997 2231816 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:01:35.242146 2231816 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:01:35.243235 2231816 start.go:297] selected driver: kvm2
	I0414 14:01:35.243250 2231816 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:01:35.243263 2231816 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:01:35.244261 2231816 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.244350 2231816 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20623-2183077/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:01:35.260111 2231816 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:01:35.260173 2231816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 14:01:35.260442 2231816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:01:35.260483 2231816 cni.go:84] Creating CNI manager for ""
	I0414 14:01:35.260540 2231816 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:01:35.260552 2231816 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 14:01:35.260609 2231816 start.go:340] cluster config:
	{Name:no-preload-496809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-496809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:01:35.260760 2231816 iso.go:125] acquiring lock: {Name:mk1b6bc811d798b73231639961523f4c8d001a9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.262762 2231816 out.go:177] * Starting "no-preload-496809" primary control-plane node in "no-preload-496809" cluster
	I0414 14:01:32.825450 2231182 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 14:01:32.840650 2231182 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 14:01:32.867503 2231182 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 14:01:32.874714 2231182 system_pods.go:59] 6 kube-system pods found
	I0414 14:01:32.874776 2231182 system_pods.go:61] "coredns-668d6bf9bc-547jp" [1e9d901a-c53e-4a1d-9e5b-cb668fc9c105] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 14:01:32.874791 2231182 system_pods.go:61] "etcd-pause-648153" [4234866f-0e92-46ef-942b-9f0f226eda75] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0414 14:01:32.874812 2231182 system_pods.go:61] "kube-apiserver-pause-648153" [4e676b12-2146-4f5c-a2ac-bc90525b5ee1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0414 14:01:32.874831 2231182 system_pods.go:61] "kube-controller-manager-pause-648153" [0af79766-a5a2-4ea7-b82e-1258520095ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0414 14:01:32.874846 2231182 system_pods.go:61] "kube-proxy-25n6s" [8400d3e1-b5ba-49a2-b916-fe8d6188fd6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0414 14:01:32.874860 2231182 system_pods.go:61] "kube-scheduler-pause-648153" [609512ad-5b0f-4810-ab03-4655c7bac009] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0414 14:01:32.874873 2231182 system_pods.go:74] duration metric: took 7.34202ms to wait for pod list to return data ...
	I0414 14:01:32.874884 2231182 node_conditions.go:102] verifying NodePressure condition ...
	I0414 14:01:32.880077 2231182 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 14:01:32.880137 2231182 node_conditions.go:123] node cpu capacity is 2
	I0414 14:01:32.880157 2231182 node_conditions.go:105] duration metric: took 5.26398ms to run NodePressure ...
	I0414 14:01:32.880183 2231182 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 14:01:33.177563 2231182 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0414 14:01:33.182678 2231182 kubeadm.go:739] kubelet initialised
	I0414 14:01:33.182709 2231182 kubeadm.go:740] duration metric: took 5.108017ms waiting for restarted kubelet to initialise ...
	I0414 14:01:33.182720 2231182 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 14:01:33.186569 2231182 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-547jp" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:34.193711 2231182 pod_ready.go:93] pod "coredns-668d6bf9bc-547jp" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:34.193743 2231182 pod_ready.go:82] duration metric: took 1.00714443s for pod "coredns-668d6bf9bc-547jp" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:34.193757 2231182 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:36.442737 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:36.443873 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | unable to find current IP address of domain kubernetes-upgrade-461086 in network mk-kubernetes-upgrade-461086
	I0414 14:01:36.443990 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | I0414 14:01:36.443865 2231562 retry.go:31] will retry after 4.466060986s: waiting for domain to come up
	I0414 14:01:35.263700 2231816 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:01:35.263832 2231816 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/config.json ...
	I0414 14:01:35.263871 2231816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/config.json: {Name:mk4733ea686e19da28de35e918d5ba0f91e27fca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:01:35.263931 2231816 cache.go:107] acquiring lock: {Name:mk8bccd379934f87abefd6ca9cc6e0764b72a176 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.263967 2231816 cache.go:107] acquiring lock: {Name:mk18f258d09625d9b461d745de6d396f14868aea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.264069 2231816 start.go:360] acquireMachinesLock for no-preload-496809: {Name:mka8bf7d0904b7ab9a32ecac2c5513c5d5418afd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:01:35.264085 2231816 cache.go:107] acquiring lock: {Name:mk74c33da3b82a06c8113eb1f480b288acb9991d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.264170 2231816 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.2
	I0414 14:01:35.264143 2231816 cache.go:107] acquiring lock: {Name:mkae2e56e08b777aa8021c824fdf960ed6abaa4a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.264144 2231816 cache.go:107] acquiring lock: {Name:mkfbd5e4d444bc41cfae970b03510b4410bdbc22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.264177 2231816 cache.go:107] acquiring lock: {Name:mk507c51444df1a037dcb1e883f106a8a46a578b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.264193 2231816 cache.go:115] /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0414 14:01:35.264273 2231816 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 351.948µs
	I0414 14:01:35.264267 2231816 cache.go:107] acquiring lock: {Name:mk8569967c15be76de24392934114068f6b6f82a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.264301 2231816 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0414 14:01:35.264331 2231816 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0414 14:01:35.264360 2231816 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.2
	I0414 14:01:35.264331 2231816 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0414 14:01:35.264510 2231816 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0414 14:01:35.264540 2231816 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0414 14:01:35.264505 2231816 cache.go:107] acquiring lock: {Name:mkefbfd236acc12d8d204e84c35f5e0182d15bfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:35.264808 2231816 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.2
	I0414 14:01:35.265562 2231816 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.2
	I0414 14:01:35.265641 2231816 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0414 14:01:35.265566 2231816 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.2
	I0414 14:01:35.265863 2231816 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0414 14:01:35.265897 2231816 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.2
	I0414 14:01:35.266063 2231816 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0414 14:01:35.266073 2231816 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0414 14:01:35.426154 2231816 cache.go:162] opening:  /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0414 14:01:35.450366 2231816 cache.go:162] opening:  /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2
	I0414 14:01:35.469250 2231816 cache.go:162] opening:  /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0414 14:01:35.483998 2231816 cache.go:157] /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0414 14:01:35.484148 2231816 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 219.911594ms
	I0414 14:01:35.484175 2231816 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0414 14:01:35.590695 2231816 cache.go:162] opening:  /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2
	I0414 14:01:35.591300 2231816 cache.go:162] opening:  /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2
	I0414 14:01:35.593512 2231816 cache.go:162] opening:  /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2
	I0414 14:01:35.701856 2231816 cache.go:157] /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 exists
	I0414 14:01:35.701888 2231816 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.2" -> "/home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2" took 437.93472ms
	I0414 14:01:35.701899 2231816 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.2 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 succeeded
	I0414 14:01:35.961462 2231816 cache.go:162] opening:  /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0414 14:01:36.703929 2231816 cache.go:157] /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 exists
	I0414 14:01:36.703961 2231816 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.2" -> "/home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2" took 1.43994066s
	I0414 14:01:36.703973 2231816 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.2 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 succeeded
	I0414 14:01:36.987967 2231816 cache.go:157] /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0414 14:01:36.988001 2231816 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 1.723900865s
	I0414 14:01:36.988019 2231816 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0414 14:01:37.036094 2231816 cache.go:157] /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 exists
	I0414 14:01:37.036128 2231816 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.2" -> "/home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2" took 1.772038971s
	I0414 14:01:37.036140 2231816 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.2 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 succeeded
	I0414 14:01:37.343480 2231816 cache.go:157] /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 exists
	I0414 14:01:37.343510 2231816 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.2" -> "/home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2" took 2.079556309s
	I0414 14:01:37.343523 2231816 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.2 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 succeeded
	I0414 14:01:37.450033 2231816 cache.go:157] /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0414 14:01:37.450063 2231816 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 2.185889176s
	I0414 14:01:37.450075 2231816 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0414 14:01:37.450097 2231816 cache.go:87] Successfully saved all images to host disk.
	I0414 14:01:36.200764 2231182 pod_ready.go:103] pod "etcd-pause-648153" in "kube-system" namespace has status "Ready":"False"
	I0414 14:01:38.702400 2231182 pod_ready.go:103] pod "etcd-pause-648153" in "kube-system" namespace has status "Ready":"False"
	I0414 14:01:42.385450 2231425 start.go:364] duration metric: took 35.396017033s to acquireMachinesLock for "old-k8s-version-954411"
	I0414 14:01:42.385550 2231425 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-954411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-954411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:01:42.385687 2231425 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:01:40.914962 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:40.915608 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has current primary IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:40.915644 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) found domain IP: 192.168.50.41
	I0414 14:01:40.915653 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) reserving static IP address...
	I0414 14:01:40.916124 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) reserved static IP address 192.168.50.41 for domain kubernetes-upgrade-461086
	I0414 14:01:40.916151 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-461086", mac: "52:54:00:66:0c:5b", ip: "192.168.50.41"} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:40.916172 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) waiting for SSH...
	I0414 14:01:40.916202 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | skip adding static IP to network mk-kubernetes-upgrade-461086 - found existing host DHCP lease matching {name: "kubernetes-upgrade-461086", mac: "52:54:00:66:0c:5b", ip: "192.168.50.41"}
	I0414 14:01:40.916224 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | Getting to WaitForSSH function...
	I0414 14:01:40.918427 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:40.918795 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:40.918827 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:40.918904 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | Using SSH client type: external
	I0414 14:01:40.918948 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | Using SSH private key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa (-rw-------)
	I0414 14:01:40.919000 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:01:40.919017 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | About to run SSH command:
	I0414 14:01:40.919026 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | exit 0
	I0414 14:01:41.044687 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | SSH cmd err, output: <nil>: 
	I0414 14:01:41.045156 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetConfigRaw
	I0414 14:01:41.045784 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetIP
	I0414 14:01:41.048903 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.049310 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:41.049341 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.049567 2231322 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/config.json ...
	I0414 14:01:41.049797 2231322 machine.go:93] provisionDockerMachine start ...
	I0414 14:01:41.049816 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:01:41.049995 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:41.052498 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.052796 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:41.052822 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.052956 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:01:41.053116 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:41.053270 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:41.053365 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:01:41.053496 2231322 main.go:141] libmachine: Using SSH client type: native
	I0414 14:01:41.053730 2231322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 14:01:41.053744 2231322 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:01:41.165279 2231322 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 14:01:41.165312 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetMachineName
	I0414 14:01:41.165627 2231322 buildroot.go:166] provisioning hostname "kubernetes-upgrade-461086"
	I0414 14:01:41.165664 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetMachineName
	I0414 14:01:41.165904 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:41.168764 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.169141 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:41.169182 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.169297 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:01:41.169499 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:41.169645 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:41.169753 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:01:41.169868 2231322 main.go:141] libmachine: Using SSH client type: native
	I0414 14:01:41.170160 2231322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 14:01:41.170178 2231322 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-461086 && echo "kubernetes-upgrade-461086" | sudo tee /etc/hostname
	I0414 14:01:41.296252 2231322 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-461086
	
	I0414 14:01:41.296290 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:41.299486 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.299887 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:41.299920 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.300114 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:01:41.300296 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:41.300398 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:41.300511 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:01:41.300712 2231322 main.go:141] libmachine: Using SSH client type: native
	I0414 14:01:41.300952 2231322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 14:01:41.300969 2231322 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-461086' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-461086/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-461086' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:01:41.421964 2231322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:01:41.422001 2231322 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20623-2183077/.minikube CaCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20623-2183077/.minikube}
	I0414 14:01:41.422026 2231322 buildroot.go:174] setting up certificates
	I0414 14:01:41.422040 2231322 provision.go:84] configureAuth start
	I0414 14:01:41.422054 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetMachineName
	I0414 14:01:41.422393 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetIP
	I0414 14:01:41.425179 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.425647 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:41.425694 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.425907 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:41.428794 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.429198 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:41.429238 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.429427 2231322 provision.go:143] copyHostCerts
	I0414 14:01:41.429484 2231322 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem, removing ...
	I0414 14:01:41.429504 2231322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem
	I0414 14:01:41.429562 2231322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem (1078 bytes)
	I0414 14:01:41.429663 2231322 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem, removing ...
	I0414 14:01:41.429671 2231322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem
	I0414 14:01:41.429689 2231322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem (1123 bytes)
	I0414 14:01:41.429763 2231322 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem, removing ...
	I0414 14:01:41.429772 2231322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem
	I0414 14:01:41.429793 2231322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem (1675 bytes)
	I0414 14:01:41.429874 2231322 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-461086 san=[127.0.0.1 192.168.50.41 kubernetes-upgrade-461086 localhost minikube]
	I0414 14:01:41.738994 2231322 provision.go:177] copyRemoteCerts
	I0414 14:01:41.739069 2231322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:01:41.739097 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:41.741988 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.742340 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:41.742377 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.742533 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:01:41.742738 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:41.742886 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:01:41.743033 2231322 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa Username:docker}
	I0414 14:01:41.827151 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 14:01:41.853578 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0414 14:01:41.877764 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:01:41.901387 2231322 provision.go:87] duration metric: took 479.332038ms to configureAuth
	I0414 14:01:41.901428 2231322 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:01:41.901597 2231322 config.go:182] Loaded profile config "kubernetes-upgrade-461086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:01:41.901676 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:41.904356 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.904760 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:41.904793 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:41.905117 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:01:41.905388 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:41.905565 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:41.905706 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:01:41.905856 2231322 main.go:141] libmachine: Using SSH client type: native
	I0414 14:01:41.906087 2231322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 14:01:41.906101 2231322 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 14:01:42.141445 2231322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 14:01:42.141499 2231322 machine.go:96] duration metric: took 1.091683943s to provisionDockerMachine
	I0414 14:01:42.141516 2231322 start.go:293] postStartSetup for "kubernetes-upgrade-461086" (driver="kvm2")
	I0414 14:01:42.141531 2231322 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:01:42.141566 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:01:42.141940 2231322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:01:42.141976 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:42.144768 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.145117 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:42.145149 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.145285 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:01:42.145477 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:42.145658 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:01:42.145813 2231322 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa Username:docker}
	I0414 14:01:42.233475 2231322 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:01:42.238032 2231322 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:01:42.238063 2231322 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/addons for local assets ...
	I0414 14:01:42.238129 2231322 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/files for local assets ...
	I0414 14:01:42.238239 2231322 filesync.go:149] local asset: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem -> 21904002.pem in /etc/ssl/certs
	I0414 14:01:42.238373 2231322 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:01:42.248142 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:01:42.272322 2231322 start.go:296] duration metric: took 130.790547ms for postStartSetup
	I0414 14:01:42.272365 2231322 fix.go:56] duration metric: took 20.722087655s for fixHost
	I0414 14:01:42.272388 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:42.275222 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.275485 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:42.275519 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.275661 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:01:42.275862 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:42.276024 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:42.276121 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:01:42.276318 2231322 main.go:141] libmachine: Using SSH client type: native
	I0414 14:01:42.276545 2231322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.41 22 <nil> <nil>}
	I0414 14:01:42.276558 2231322 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:01:42.385265 2231322 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744639302.363500668
	
	I0414 14:01:42.385295 2231322 fix.go:216] guest clock: 1744639302.363500668
	I0414 14:01:42.385305 2231322 fix.go:229] Guest: 2025-04-14 14:01:42.363500668 +0000 UTC Remote: 2025-04-14 14:01:42.2723687 +0000 UTC m=+43.619438118 (delta=91.131968ms)
	I0414 14:01:42.385334 2231322 fix.go:200] guest clock delta is within tolerance: 91.131968ms
	I0414 14:01:42.385341 2231322 start.go:83] releasing machines lock for "kubernetes-upgrade-461086", held for 20.835191505s
	I0414 14:01:42.385376 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:01:42.385678 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetIP
	I0414 14:01:42.388601 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.389027 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:42.389072 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.389323 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:01:42.389939 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:01:42.390137 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .DriverName
	I0414 14:01:42.390243 2231322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:01:42.390309 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:42.390354 2231322 ssh_runner.go:195] Run: cat /version.json
	I0414 14:01:42.390384 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHHostname
	I0414 14:01:42.393180 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.393408 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.393633 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:42.393671 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.393814 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:01:42.393910 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:42.393935 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:42.393974 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:42.394149 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHPort
	I0414 14:01:42.394189 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:01:42.394310 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHKeyPath
	I0414 14:01:42.394310 2231322 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa Username:docker}
	I0414 14:01:42.394474 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetSSHUsername
	I0414 14:01:42.394607 2231322 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/kubernetes-upgrade-461086/id_rsa Username:docker}
	I0414 14:01:42.474122 2231322 ssh_runner.go:195] Run: systemctl --version
	I0414 14:01:42.505539 2231322 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 14:01:42.651222 2231322 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:01:42.661445 2231322 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:01:42.661538 2231322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:01:42.680283 2231322 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:01:42.680318 2231322 start.go:495] detecting cgroup driver to use...
	I0414 14:01:42.680386 2231322 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 14:01:42.697511 2231322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 14:01:42.712035 2231322 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:01:42.712096 2231322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:01:42.725771 2231322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:01:42.739270 2231322 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:01:42.859060 2231322 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:01:43.018755 2231322 docker.go:233] disabling docker service ...
	I0414 14:01:43.018839 2231322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:01:43.033478 2231322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:01:43.045921 2231322 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:01:43.189045 2231322 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:01:43.321155 2231322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:01:43.337744 2231322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:01:43.356199 2231322 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 14:01:43.356284 2231322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:43.366035 2231322 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 14:01:43.366123 2231322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:43.376252 2231322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:43.386047 2231322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:43.396559 2231322 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:01:43.406935 2231322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:43.417596 2231322 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:43.434823 2231322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:01:43.445432 2231322 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:01:43.455324 2231322 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:01:43.455373 2231322 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:01:43.469972 2231322 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:01:43.482649 2231322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:01:43.605961 2231322 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 14:01:41.199453 2231182 pod_ready.go:103] pod "etcd-pause-648153" in "kube-system" namespace has status "Ready":"False"
	I0414 14:01:43.202623 2231182 pod_ready.go:93] pod "etcd-pause-648153" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:43.202648 2231182 pod_ready.go:82] duration metric: took 9.008882393s for pod "etcd-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:43.202658 2231182 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:43.708985 2231182 pod_ready.go:93] pod "kube-apiserver-pause-648153" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:43.709035 2231182 pod_ready.go:82] duration metric: took 506.355802ms for pod "kube-apiserver-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:43.709052 2231182 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:43.713829 2231182 pod_ready.go:93] pod "kube-controller-manager-pause-648153" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:43.713858 2231182 pod_ready.go:82] duration metric: took 4.795969ms for pod "kube-controller-manager-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:43.713871 2231182 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-25n6s" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:43.719206 2231182 pod_ready.go:93] pod "kube-proxy-25n6s" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:43.719234 2231182 pod_ready.go:82] duration metric: took 5.35544ms for pod "kube-proxy-25n6s" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:43.719246 2231182 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:43.724666 2231182 pod_ready.go:93] pod "kube-scheduler-pause-648153" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:43.724688 2231182 pod_ready.go:82] duration metric: took 5.433234ms for pod "kube-scheduler-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:43.724697 2231182 pod_ready.go:39] duration metric: took 10.541963231s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 14:01:43.724719 2231182 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:01:43.738435 2231182 ops.go:34] apiserver oom_adj: -16
	I0414 14:01:43.738459 2231182 kubeadm.go:597] duration metric: took 17.836791094s to restartPrimaryControlPlane
	I0414 14:01:43.738470 2231182 kubeadm.go:394] duration metric: took 18.158639118s to StartCluster
	I0414 14:01:43.738493 2231182 settings.go:142] acquiring lock: {Name:mk2be36efecc8d95b489214d6449055db55f6f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:01:43.738586 2231182 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:01:43.739384 2231182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/kubeconfig: {Name:mka4d12cff403cd78c270c5ea752d21aa135c1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:01:43.739655 2231182 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.188 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:01:43.739708 2231182 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:01:43.739886 2231182 config.go:182] Loaded profile config "pause-648153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:01:43.741466 2231182 out.go:177] * Verifying Kubernetes components...
	I0414 14:01:43.741483 2231182 out.go:177] * Enabled addons: 
	I0414 14:01:43.719304 2231322 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 14:01:43.719378 2231322 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 14:01:43.726236 2231322 start.go:563] Will wait 60s for crictl version
	I0414 14:01:43.726311 2231322 ssh_runner.go:195] Run: which crictl
	I0414 14:01:43.730645 2231322 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:01:43.786832 2231322 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 14:01:43.786930 2231322 ssh_runner.go:195] Run: crio --version
	I0414 14:01:43.822362 2231322 ssh_runner.go:195] Run: crio --version
	I0414 14:01:43.858700 2231322 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 14:01:43.742844 2231182 addons.go:514] duration metric: took 3.147015ms for enable addons: enabled=[]
	I0414 14:01:43.742901 2231182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:01:43.945766 2231182 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:01:43.965032 2231182 node_ready.go:35] waiting up to 6m0s for node "pause-648153" to be "Ready" ...
	I0414 14:01:43.968109 2231182 node_ready.go:49] node "pause-648153" has status "Ready":"True"
	I0414 14:01:43.968136 2231182 node_ready.go:38] duration metric: took 3.071347ms for node "pause-648153" to be "Ready" ...
	I0414 14:01:43.968147 2231182 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 14:01:44.000637 2231182 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-547jp" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:44.399767 2231182 pod_ready.go:93] pod "coredns-668d6bf9bc-547jp" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:44.399804 2231182 pod_ready.go:82] duration metric: took 399.13575ms for pod "coredns-668d6bf9bc-547jp" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:44.399818 2231182 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:44.799349 2231182 pod_ready.go:93] pod "etcd-pause-648153" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:44.799383 2231182 pod_ready.go:82] duration metric: took 399.556496ms for pod "etcd-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:44.799398 2231182 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:45.198480 2231182 pod_ready.go:93] pod "kube-apiserver-pause-648153" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:45.198518 2231182 pod_ready.go:82] duration metric: took 399.110953ms for pod "kube-apiserver-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:45.198533 2231182 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:42.387303 2231425 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:01:42.387570 2231425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:01:42.387643 2231425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:01:42.408560 2231425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46147
	I0414 14:01:42.409136 2231425 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:01:42.409723 2231425 main.go:141] libmachine: Using API Version  1
	I0414 14:01:42.409750 2231425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:01:42.410166 2231425 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:01:42.410355 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetMachineName
	I0414 14:01:42.410521 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:01:42.410672 2231425 start.go:159] libmachine.API.Create for "old-k8s-version-954411" (driver="kvm2")
	I0414 14:01:42.410709 2231425 client.go:168] LocalClient.Create starting
	I0414 14:01:42.410743 2231425 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem
	I0414 14:01:42.410794 2231425 main.go:141] libmachine: Decoding PEM data...
	I0414 14:01:42.410813 2231425 main.go:141] libmachine: Parsing certificate...
	I0414 14:01:42.410892 2231425 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem
	I0414 14:01:42.410921 2231425 main.go:141] libmachine: Decoding PEM data...
	I0414 14:01:42.410939 2231425 main.go:141] libmachine: Parsing certificate...
	I0414 14:01:42.410963 2231425 main.go:141] libmachine: Running pre-create checks...
	I0414 14:01:42.410974 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .PreCreateCheck
	I0414 14:01:42.411361 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetConfigRaw
	I0414 14:01:42.411765 2231425 main.go:141] libmachine: Creating machine...
	I0414 14:01:42.411782 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .Create
	I0414 14:01:42.411958 2231425 main.go:141] libmachine: (old-k8s-version-954411) creating KVM machine...
	I0414 14:01:42.411979 2231425 main.go:141] libmachine: (old-k8s-version-954411) creating network...
	I0414 14:01:42.413183 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found existing default KVM network
	I0414 14:01:42.414239 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:42.414087 2231858 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201190}
	I0414 14:01:42.414263 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | created network xml: 
	I0414 14:01:42.414282 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | <network>
	I0414 14:01:42.414295 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |   <name>mk-old-k8s-version-954411</name>
	I0414 14:01:42.414309 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |   <dns enable='no'/>
	I0414 14:01:42.414320 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |   
	I0414 14:01:42.414333 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 14:01:42.414344 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |     <dhcp>
	I0414 14:01:42.414353 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 14:01:42.414365 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |     </dhcp>
	I0414 14:01:42.414373 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |   </ip>
	I0414 14:01:42.414386 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |   
	I0414 14:01:42.414395 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | </network>
	I0414 14:01:42.414404 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | 
	I0414 14:01:42.419672 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | trying to create private KVM network mk-old-k8s-version-954411 192.168.39.0/24...
	I0414 14:01:42.495567 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | private KVM network mk-old-k8s-version-954411 192.168.39.0/24 created
	I0414 14:01:42.495599 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:42.495526 2231858 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:01:42.495614 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting up store path in /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411 ...
	I0414 14:01:42.495631 2231425 main.go:141] libmachine: (old-k8s-version-954411) building disk image from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:01:42.495727 2231425 main.go:141] libmachine: (old-k8s-version-954411) Downloading /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:01:42.779984 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:42.779839 2231858 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa...
	I0414 14:01:42.941486 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:42.941322 2231858 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/old-k8s-version-954411.rawdisk...
	I0414 14:01:42.941548 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | Writing magic tar header
	I0414 14:01:42.941570 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | Writing SSH key tar header
	I0414 14:01:42.941589 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:42.941479 2231858 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411 ...
	I0414 14:01:42.941603 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411
	I0414 14:01:42.941624 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411 (perms=drwx------)
	I0414 14:01:42.941642 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines
	I0414 14:01:42.941652 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:01:42.941790 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:01:42.941856 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube (perms=drwxr-xr-x)
	I0414 14:01:42.941870 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077
	I0414 14:01:42.941895 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:01:42.941910 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home/jenkins
	I0414 14:01:42.941929 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home
	I0414 14:01:42.941942 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | skipping /home - not owner
	I0414 14:01:42.941978 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077 (perms=drwxrwxr-x)
	I0414 14:01:42.942005 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:01:42.942019 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:01:42.942030 2231425 main.go:141] libmachine: (old-k8s-version-954411) creating domain...
	I0414 14:01:42.943234 2231425 main.go:141] libmachine: (old-k8s-version-954411) define libvirt domain using xml: 
	I0414 14:01:42.943260 2231425 main.go:141] libmachine: (old-k8s-version-954411) <domain type='kvm'>
	I0414 14:01:42.943295 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <name>old-k8s-version-954411</name>
	I0414 14:01:42.943319 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <memory unit='MiB'>2200</memory>
	I0414 14:01:42.943331 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <vcpu>2</vcpu>
	I0414 14:01:42.943342 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <features>
	I0414 14:01:42.943353 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <acpi/>
	I0414 14:01:42.943364 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <apic/>
	I0414 14:01:42.943378 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <pae/>
	I0414 14:01:42.943393 2231425 main.go:141] libmachine: (old-k8s-version-954411)     
	I0414 14:01:42.943402 2231425 main.go:141] libmachine: (old-k8s-version-954411)   </features>
	I0414 14:01:42.943413 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <cpu mode='host-passthrough'>
	I0414 14:01:42.943425 2231425 main.go:141] libmachine: (old-k8s-version-954411)   
	I0414 14:01:42.943433 2231425 main.go:141] libmachine: (old-k8s-version-954411)   </cpu>
	I0414 14:01:42.943442 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <os>
	I0414 14:01:42.943453 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <type>hvm</type>
	I0414 14:01:42.943476 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <boot dev='cdrom'/>
	I0414 14:01:42.943496 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <boot dev='hd'/>
	I0414 14:01:42.943525 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <bootmenu enable='no'/>
	I0414 14:01:42.943535 2231425 main.go:141] libmachine: (old-k8s-version-954411)   </os>
	I0414 14:01:42.943544 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <devices>
	I0414 14:01:42.943556 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <disk type='file' device='cdrom'>
	I0414 14:01:42.943587 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/boot2docker.iso'/>
	I0414 14:01:42.943601 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <target dev='hdc' bus='scsi'/>
	I0414 14:01:42.943607 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <readonly/>
	I0414 14:01:42.943615 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </disk>
	I0414 14:01:42.943624 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <disk type='file' device='disk'>
	I0414 14:01:42.943644 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:01:42.943664 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/old-k8s-version-954411.rawdisk'/>
	I0414 14:01:42.943677 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <target dev='hda' bus='virtio'/>
	I0414 14:01:42.943688 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </disk>
	I0414 14:01:42.943699 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <interface type='network'>
	I0414 14:01:42.943710 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <source network='mk-old-k8s-version-954411'/>
	I0414 14:01:42.943722 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <model type='virtio'/>
	I0414 14:01:42.943735 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </interface>
	I0414 14:01:42.943747 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <interface type='network'>
	I0414 14:01:42.943757 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <source network='default'/>
	I0414 14:01:42.943765 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <model type='virtio'/>
	I0414 14:01:42.943775 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </interface>
	I0414 14:01:42.943784 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <serial type='pty'>
	I0414 14:01:42.943794 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <target port='0'/>
	I0414 14:01:42.943802 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </serial>
	I0414 14:01:42.943812 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <console type='pty'>
	I0414 14:01:42.943821 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <target type='serial' port='0'/>
	I0414 14:01:42.943835 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </console>
	I0414 14:01:42.943847 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <rng model='virtio'>
	I0414 14:01:42.943858 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <backend model='random'>/dev/random</backend>
	I0414 14:01:42.943868 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </rng>
	I0414 14:01:42.943877 2231425 main.go:141] libmachine: (old-k8s-version-954411)     
	I0414 14:01:42.943885 2231425 main.go:141] libmachine: (old-k8s-version-954411)     
	I0414 14:01:42.943894 2231425 main.go:141] libmachine: (old-k8s-version-954411)   </devices>
	I0414 14:01:42.943902 2231425 main.go:141] libmachine: (old-k8s-version-954411) </domain>
	I0414 14:01:42.943915 2231425 main.go:141] libmachine: (old-k8s-version-954411) 
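The domain XML printed above is then registered with libvirt and booted (the "starting domain..." / "creating domain..." lines that follow). As a rough illustration only, not the kvm2 driver's actual code path, the same define-and-start step can be reproduced with the virsh CLI; the XML path below is a hypothetical placeholder.

```go
// Minimal sketch: define a libvirt guest from a generated domain XML and boot it,
// shelling out to virsh. Paths and the domain name are placeholders.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const xmlPath = "/tmp/old-k8s-version-954411.xml" // hypothetical file holding the XML above
	const domain = "old-k8s-version-954411"

	// Register the domain definition with libvirt, then start the guest.
	if err := run("virsh", "--connect", "qemu:///system", "define", xmlPath); err != nil {
		fmt.Fprintln(os.Stderr, "define failed:", err)
		os.Exit(1)
	}
	if err := run("virsh", "--connect", "qemu:///system", "start", domain); err != nil {
		fmt.Fprintln(os.Stderr, "start failed:", err)
		os.Exit(1)
	}
}
```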
	I0414 14:01:42.947328 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:c0:7b:40 in network default
	I0414 14:01:42.948005 2231425 main.go:141] libmachine: (old-k8s-version-954411) starting domain...
	I0414 14:01:42.948024 2231425 main.go:141] libmachine: (old-k8s-version-954411) ensuring networks are active...
	I0414 14:01:42.948036 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:42.948755 2231425 main.go:141] libmachine: (old-k8s-version-954411) Ensuring network default is active
	I0414 14:01:42.949156 2231425 main.go:141] libmachine: (old-k8s-version-954411) Ensuring network mk-old-k8s-version-954411 is active
	I0414 14:01:42.949711 2231425 main.go:141] libmachine: (old-k8s-version-954411) getting domain XML...
	I0414 14:01:42.950550 2231425 main.go:141] libmachine: (old-k8s-version-954411) creating domain...
	I0414 14:01:44.322603 2231425 main.go:141] libmachine: (old-k8s-version-954411) waiting for IP...
	I0414 14:01:44.323750 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:44.324363 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:44.324410 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:44.324348 2231858 retry.go:31] will retry after 279.076334ms: waiting for domain to come up
	I0414 14:01:44.605212 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:44.605923 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:44.605954 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:44.605856 2231858 retry.go:31] will retry after 254.872686ms: waiting for domain to come up
	I0414 14:01:44.862616 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:44.863190 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:44.863226 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:44.863176 2231858 retry.go:31] will retry after 298.853913ms: waiting for domain to come up
	I0414 14:01:45.164114 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:45.164912 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:45.164985 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:45.164894 2231858 retry.go:31] will retry after 536.754794ms: waiting for domain to come up
	I0414 14:01:45.703716 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:45.704247 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:45.704275 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:45.704222 2231858 retry.go:31] will retry after 518.01594ms: waiting for domain to come up
	I0414 14:01:46.224061 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:46.224567 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:46.224597 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:46.224521 2231858 retry.go:31] will retry after 811.819388ms: waiting for domain to come up
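The "will retry after ... waiting for domain to come up" lines above are a poll-with-growing-delay loop around IP discovery. A minimal sketch of that pattern is below; `lookupIP` is a hypothetical stand-in for however the address is actually found (in the real driver, by inspecting the libvirt network's DHCP leases for the guest's MAC).

```go
// Sketch of a jittered, growing retry loop while waiting for a VM to obtain an IP.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder; it always fails here to exercise the retry path.
func lookupIP(domain string) (string, error) {
	return "", errors.New("no DHCP lease yet")
}

func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		// Randomize and grow the delay, similar to the ~250ms..800ms steps in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 2*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	if ip, err := waitForIP("old-k8s-version-954411", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("got IP:", ip)
	}
}
```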
	I0414 14:01:45.599708 2231182 pod_ready.go:93] pod "kube-controller-manager-pause-648153" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:45.599737 2231182 pod_ready.go:82] duration metric: took 401.195662ms for pod "kube-controller-manager-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:45.599751 2231182 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-25n6s" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:45.998940 2231182 pod_ready.go:93] pod "kube-proxy-25n6s" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:45.998972 2231182 pod_ready.go:82] duration metric: took 399.212322ms for pod "kube-proxy-25n6s" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:45.998986 2231182 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:46.403125 2231182 pod_ready.go:93] pod "kube-scheduler-pause-648153" in "kube-system" namespace has status "Ready":"True"
	I0414 14:01:46.403157 2231182 pod_ready.go:82] duration metric: took 404.162334ms for pod "kube-scheduler-pause-648153" in "kube-system" namespace to be "Ready" ...
	I0414 14:01:46.403190 2231182 pod_ready.go:39] duration metric: took 2.435009698s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 14:01:46.403219 2231182 api_server.go:52] waiting for apiserver process to appear ...
	I0414 14:01:46.403293 2231182 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:01:46.423832 2231182 api_server.go:72] duration metric: took 2.684132791s to wait for apiserver process to appear ...
	I0414 14:01:46.423906 2231182 api_server.go:88] waiting for apiserver healthz status ...
	I0414 14:01:46.423934 2231182 api_server.go:253] Checking apiserver healthz at https://192.168.61.188:8443/healthz ...
	I0414 14:01:46.429977 2231182 api_server.go:279] https://192.168.61.188:8443/healthz returned 200:
	ok
	I0414 14:01:46.431273 2231182 api_server.go:141] control plane version: v1.32.2
	I0414 14:01:46.431301 2231182 api_server.go:131] duration metric: took 7.385091ms to wait for apiserver health ...
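The healthz wait above simply polls https://<node>:8443/healthz until it answers 200 with the body "ok". A minimal sketch follows; it skips TLS verification purely for brevity, whereas the real check trusts the cluster CA, so treat that detail as a simplification.

```go
// Sketch: poll the apiserver /healthz endpoint until it reports healthy or a timeout elapses.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Sketch only: the real check verifies against the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.188:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ok")
}
```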
	I0414 14:01:46.431316 2231182 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 14:01:46.599462 2231182 system_pods.go:59] 6 kube-system pods found
	I0414 14:01:46.599496 2231182 system_pods.go:61] "coredns-668d6bf9bc-547jp" [1e9d901a-c53e-4a1d-9e5b-cb668fc9c105] Running
	I0414 14:01:46.599501 2231182 system_pods.go:61] "etcd-pause-648153" [4234866f-0e92-46ef-942b-9f0f226eda75] Running
	I0414 14:01:46.599505 2231182 system_pods.go:61] "kube-apiserver-pause-648153" [4e676b12-2146-4f5c-a2ac-bc90525b5ee1] Running
	I0414 14:01:46.599508 2231182 system_pods.go:61] "kube-controller-manager-pause-648153" [0af79766-a5a2-4ea7-b82e-1258520095ba] Running
	I0414 14:01:46.599511 2231182 system_pods.go:61] "kube-proxy-25n6s" [8400d3e1-b5ba-49a2-b916-fe8d6188fd6a] Running
	I0414 14:01:46.599515 2231182 system_pods.go:61] "kube-scheduler-pause-648153" [609512ad-5b0f-4810-ab03-4655c7bac009] Running
	I0414 14:01:46.599523 2231182 system_pods.go:74] duration metric: took 168.198382ms to wait for pod list to return data ...
	I0414 14:01:46.599530 2231182 default_sa.go:34] waiting for default service account to be created ...
	I0414 14:01:46.798786 2231182 default_sa.go:45] found service account: "default"
	I0414 14:01:46.798833 2231182 default_sa.go:55] duration metric: took 199.294389ms for default service account to be created ...
	I0414 14:01:46.798849 2231182 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 14:01:46.999534 2231182 system_pods.go:86] 6 kube-system pods found
	I0414 14:01:46.999574 2231182 system_pods.go:89] "coredns-668d6bf9bc-547jp" [1e9d901a-c53e-4a1d-9e5b-cb668fc9c105] Running
	I0414 14:01:46.999582 2231182 system_pods.go:89] "etcd-pause-648153" [4234866f-0e92-46ef-942b-9f0f226eda75] Running
	I0414 14:01:46.999588 2231182 system_pods.go:89] "kube-apiserver-pause-648153" [4e676b12-2146-4f5c-a2ac-bc90525b5ee1] Running
	I0414 14:01:46.999594 2231182 system_pods.go:89] "kube-controller-manager-pause-648153" [0af79766-a5a2-4ea7-b82e-1258520095ba] Running
	I0414 14:01:46.999598 2231182 system_pods.go:89] "kube-proxy-25n6s" [8400d3e1-b5ba-49a2-b916-fe8d6188fd6a] Running
	I0414 14:01:46.999603 2231182 system_pods.go:89] "kube-scheduler-pause-648153" [609512ad-5b0f-4810-ab03-4655c7bac009] Running
	I0414 14:01:46.999614 2231182 system_pods.go:126] duration metric: took 200.756417ms to wait for k8s-apps to be running ...
	I0414 14:01:46.999623 2231182 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 14:01:46.999681 2231182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:01:47.022600 2231182 system_svc.go:56] duration metric: took 22.952442ms WaitForService to wait for kubelet
	I0414 14:01:47.022642 2231182 kubeadm.go:582] duration metric: took 3.282951586s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:01:47.022669 2231182 node_conditions.go:102] verifying NodePressure condition ...
	I0414 14:01:47.199081 2231182 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 14:01:47.199126 2231182 node_conditions.go:123] node cpu capacity is 2
	I0414 14:01:47.199148 2231182 node_conditions.go:105] duration metric: took 176.469819ms to run NodePressure ...
	I0414 14:01:47.199164 2231182 start.go:241] waiting for startup goroutines ...
	I0414 14:01:47.199174 2231182 start.go:246] waiting for cluster config update ...
	I0414 14:01:47.199185 2231182 start.go:255] writing updated cluster config ...
	I0414 14:01:47.199518 2231182 ssh_runner.go:195] Run: rm -f paused
	I0414 14:01:47.264037 2231182 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 14:01:47.266794 2231182 out.go:177] * Done! kubectl is now configured to use "pause-648153" cluster and "default" namespace by default
	I0414 14:01:43.859796 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) Calling .GetIP
	I0414 14:01:43.863272 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:43.863669 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:0c:5b", ip: ""} in network mk-kubernetes-upgrade-461086: {Iface:virbr4 ExpiryTime:2025-04-14 15:01:34 +0000 UTC Type:0 Mac:52:54:00:66:0c:5b Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:kubernetes-upgrade-461086 Clientid:01:52:54:00:66:0c:5b}
	I0414 14:01:43.863711 2231322 main.go:141] libmachine: (kubernetes-upgrade-461086) DBG | domain kubernetes-upgrade-461086 has defined IP address 192.168.50.41 and MAC address 52:54:00:66:0c:5b in network mk-kubernetes-upgrade-461086
	I0414 14:01:43.863968 2231322 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0414 14:01:43.868607 2231322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:01:43.881572 2231322 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-461086 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kube
rnetes-upgrade-461086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.41 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:01:43.881708 2231322 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:01:43.881771 2231322 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:01:43.937233 2231322 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
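The preload check above runs `sudo crictl images --output json` and looks for the expected control-plane image tag before deciding whether to ship the preload tarball. A sketch of that check is below; the JSON field names mirror how crictl prints the CRI image list, but the exact shape should be treated as an assumption rather than a documented contract.

```go
// Sketch: ask the CRI runtime (via crictl) which images exist and check for one tag.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// imageList models only the fields this sketch needs from `crictl images -o json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, fmt.Errorf("crictl images: %w", err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.32.2")
	if err != nil {
		log.Fatal(err)
	}
	if !ok {
		fmt.Println("assuming images are not preloaded")
	}
}
```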
	I0414 14:01:43.937327 2231322 ssh_runner.go:195] Run: which lz4
	I0414 14:01:43.943159 2231322 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:01:43.948221 2231322 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:01:43.948254 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 14:01:45.444329 2231322 crio.go:462] duration metric: took 1.501210068s to copy over tarball
	I0414 14:01:45.444423 2231322 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:01:47.773200 2231322 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.328740439s)
	I0414 14:01:47.773236 2231322 crio.go:469] duration metric: took 2.328865801s to extract the tarball
	I0414 14:01:47.773247 2231322 ssh_runner.go:146] rm: /preloaded.tar.lz4
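Once the tarball is on the node, the log shows it being unpacked under /var (preserving extended attributes such as security.capability) and then deleted. A local sketch of those two commands is below; the real runner executes them over SSH inside the VM, and the tarball path is the same placeholder used in the log.

```go
// Sketch: extract a lz4-compressed preload tarball into /var, then remove it.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // placeholder path, mirroring the log

	// Extract, keeping xattrs like security.capability intact.
	extract := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	extract.Stdout, extract.Stderr = os.Stdout, os.Stderr
	if err := extract.Run(); err != nil {
		log.Fatalf("extracting preload: %v", err)
	}

	// Reclaim the space once the images are under /var/lib/containers.
	if err := exec.Command("sudo", "rm", "-f", tarball).Run(); err != nil {
		log.Fatalf("removing tarball: %v", err)
	}
}
```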
	I0414 14:01:47.811458 2231322 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:01:47.856621 2231322 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 14:01:47.856650 2231322 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:01:47.856667 2231322 kubeadm.go:934] updating node { 192.168.50.41 8443 v1.32.2 crio true true} ...
	I0414 14:01:47.856820 2231322 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-461086 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-461086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:01:47.856914 2231322 ssh_runner.go:195] Run: crio config
	I0414 14:01:47.918795 2231322 cni.go:84] Creating CNI manager for ""
	I0414 14:01:47.918822 2231322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:01:47.918865 2231322 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:01:47.918886 2231322 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.41 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-461086 NodeName:kubernetes-upgrade-461086 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:01:47.919008 2231322 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-461086"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.41"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.41"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
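The generated kubeadm config above is later written into the guest (see the scp to /var/tmp/minikube/kubeadm.yaml.new below) and eventually consumed by kubeadm. As a generic sketch only, not minikube's exact invocation (which adds further flags and handles the upgrade path separately), the rendered document can be written to disk and passed to `kubeadm init --config`; the path and abbreviated config string here are placeholders.

```go
// Sketch: persist a rendered kubeadm config and hand it to `kubeadm init --config`.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const path = "/var/tmp/minikube/kubeadm.yaml" // placeholder, mirroring the log
	rendered := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n# ...rest of the generated config...\n"

	if err := os.WriteFile(path, []byte(rendered), 0o644); err != nil {
		log.Fatalf("writing config: %v", err)
	}

	cmd := exec.Command("sudo", "kubeadm", "init", "--config", path)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kubeadm init: %v", err)
	}
}
```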
	
	I0414 14:01:47.919079 2231322 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:01:47.930468 2231322 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:01:47.930579 2231322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 14:01:47.940843 2231322 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0414 14:01:47.958865 2231322 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:01:47.980759 2231322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I0414 14:01:48.002880 2231322 ssh_runner.go:195] Run: grep 192.168.50.41	control-plane.minikube.internal$ /etc/hosts
	I0414 14:01:48.007176 2231322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:01:48.021101 2231322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:01:48.158885 2231322 ssh_runner.go:195] Run: sudo systemctl start kubelet
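The few lines above install the kubelet systemd drop-in and unit shown earlier, reload systemd, and start kubelet. A condensed local sketch of the same wiring follows; the drop-in content is abbreviated relative to the ExecStart printed in the log.

```go
// Sketch: install a kubelet systemd drop-in, reload systemd, and start the service.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%v: %v", args, err)
	}
}

func main() {
	// Abbreviated drop-in; the logged version also sets --bootstrap-kubeconfig and --hostname-override.
	dropIn := `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.41
`
	run("sudo", "mkdir", "-p", "/etc/systemd/system/kubelet.service.d")
	if err := os.WriteFile("/tmp/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		log.Fatalf("writing drop-in: %v", err)
	}
	run("sudo", "cp", "/tmp/10-kubeadm.conf", "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf")
	run("sudo", "systemctl", "daemon-reload")
	run("sudo", "systemctl", "start", "kubelet")
}
```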
	I0414 14:01:48.180323 2231322 certs.go:68] Setting up /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086 for IP: 192.168.50.41
	I0414 14:01:48.180350 2231322 certs.go:194] generating shared ca certs ...
	I0414 14:01:48.180373 2231322 certs.go:226] acquiring lock for ca certs: {Name:mkd994da28098ae08a84efba20f096b52fe71222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:01:48.180579 2231322 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key
	I0414 14:01:48.180638 2231322 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key
	I0414 14:01:48.180653 2231322 certs.go:256] generating profile certs ...
	I0414 14:01:48.180809 2231322 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/client.key
	I0414 14:01:48.180885 2231322 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/apiserver.key.105b5bc6
	I0414 14:01:48.180938 2231322 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/proxy-client.key
	I0414 14:01:48.181136 2231322 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem (1338 bytes)
	W0414 14:01:48.181183 2231322 certs.go:480] ignoring /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400_empty.pem, impossibly tiny 0 bytes
	I0414 14:01:48.181196 2231322 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 14:01:48.181231 2231322 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem (1078 bytes)
	I0414 14:01:48.181273 2231322 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:01:48.181306 2231322 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem (1675 bytes)
	I0414 14:01:48.181365 2231322 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:01:48.182149 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:01:48.216095 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:01:48.261681 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:01:48.294637 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:01:48.329615 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 14:01:48.362801 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:01:48.396480 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:01:48.441465 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kubernetes-upgrade-461086/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:01:48.469624 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /usr/share/ca-certificates/21904002.pem (1708 bytes)
	I0414 14:01:48.493370 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:01:48.519214 2231322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem --> /usr/share/ca-certificates/2190400.pem (1338 bytes)
	I0414 14:01:48.546508 2231322 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:01:48.564807 2231322 ssh_runner.go:195] Run: openssl version
	I0414 14:01:48.570567 2231322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21904002.pem && ln -fs /usr/share/ca-certificates/21904002.pem /etc/ssl/certs/21904002.pem"
	I0414 14:01:48.581351 2231322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21904002.pem
	I0414 14:01:48.586145 2231322 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 13:02 /usr/share/ca-certificates/21904002.pem
	I0414 14:01:48.586214 2231322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21904002.pem
	I0414 14:01:48.592468 2231322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21904002.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:01:48.605069 2231322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:01:48.618090 2231322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:01:48.623365 2231322 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:01:48.623439 2231322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:01:48.629817 2231322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:01:48.642280 2231322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2190400.pem && ln -fs /usr/share/ca-certificates/2190400.pem /etc/ssl/certs/2190400.pem"
	I0414 14:01:48.656404 2231322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2190400.pem
	I0414 14:01:48.661741 2231322 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 13:02 /usr/share/ca-certificates/2190400.pem
	I0414 14:01:48.661824 2231322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2190400.pem
	I0414 14:01:48.667643 2231322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2190400.pem /etc/ssl/certs/51391683.0"
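The repeated ln/openssl steps above install each CA PEM: link it into /etc/ssl/certs, compute its OpenSSL subject-name hash, and create the `<hash>.0` symlink that TLS libraries look up. A condensed sketch of that sequence is below, using the same commands the log runs; the PEM path in `main` is just an example.

```go
// Sketch: install a CA PEM and create the /etc/ssl/certs/<hash>.0 symlink for it.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pem string) error {
	// Link the PEM into /etc/ssl/certs under its own name.
	target := filepath.Join("/etc/ssl/certs", filepath.Base(pem))
	if out, err := exec.Command("sudo", "ln", "-fs", pem, target).CombinedOutput(); err != nil {
		return fmt.Errorf("linking %s: %v: %s", pem, err, out)
	}

	// `openssl x509 -hash -noout -in <pem>` prints the subject-name hash, e.g. "3ec20f2e".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %v", pem, err)
	}
	hash := strings.TrimSpace(string(out))

	// Create the <hash>.0 symlink that OpenSSL-based clients use for CA lookup.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if out, err := exec.Command("sudo", "ln", "-fs", target, link).CombinedOutput(); err != nil {
		return fmt.Errorf("creating %s: %v: %s", link, err, out)
	}
	fmt.Println("installed", pem, "as", link)
	return nil
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
```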
	I0414 14:01:48.681842 2231322 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:01:48.687021 2231322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 14:01:48.694436 2231322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	
	
	==> CRI-O <==
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.150059466Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bea5012-9c20-4ef8-929c-cc18c3576650 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.150320937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8754ae5c4800f6831836ae31a48b6cf1813b9c346d066760990bb16525a55834,PodSandboxId:203dfc2d964c580384d7c6bdebdfb79d590661788fcf654bc2e95c9a1b379206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744639292047558151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25n6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9911ef4ec8193c8108f56966aaf9cf59202bdc9aae18b1592cc693ba2e429a86,PodSandboxId:d31ffebaa536dce9882b2c65c9b013fa728c86b4309ac57646ce0c23af2488fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744639292031376039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-547jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b27895b4bdecd768e0c4c8d5cba45bf39ccd5f1c11f15276a18a968abdb256b,PodSandboxId:c761c6aad896b4be27b4f3d690eb10e1e63bc73b91270abfbcf27143f64da33f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744639288432601722,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977ef651516
9eed38b9ed7443d502bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba7dd29999fa053a0a8f5c89462c2cb0656d52f4b2170b4b1d0daa3957e9df4,PodSandboxId:8bff14b15bb8534ddfd602a5722a795cbf8323bcc111b8fb6af61fa9a24d1407,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744639288402930498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e
02c1612d07445bf37eea8fbba07efd,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980fca4f660be3b3111c11dd5777d63f82e32a8c0fb3a14362e65ab341324c10,PodSandboxId:cc8ac8dcf17a58e5b4867aa66f21f596f65b2ca214c27741e23c138c1577296b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744639288419456703,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e22cb0ce8096984b632fd88aa5fc36ae,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a72cc76bdc27faa8def51dc546906bd4a19ef9c95c40cdcda380afbe1200d4,PodSandboxId:379d8e077c5a25b8a9f0a02b9bbf82525aaaa05d4446490bbdde4c8e403e0268,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744639288444444642,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea445b470a6db809de2c0cc6a99f4b0,},Annotations:map[string]string{io.
kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53812db1656a8544a0abe67ec5d3d9e5ff5f9ac7329e534d6639bad53d98eff0,PodSandboxId:88d265729b343e0aa15e1eadd7c0f497d433a750f0b827a3b6a2bf2bb9a1ec4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1744639282963032600,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25n6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f3a7c9af9bdfa1c23e6c10652eaecd817b2698b70f563618cb8633540811d,PodSandboxId:0eab3308f6db74472336f9faf9d8612b5c8ffe5ce16747be4dc9a465765d91fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744639282985516952,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e02c1612d07445bf37eea8fbba07efd,},Annotations:map[string]string{io.kubernetes.container.hash:
51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce231159b9655f6c50bff76f24391c112dda60887b48247d4818ede864b7678,PodSandboxId:2c4072a4422034e3886a773f60af0b24ef65b5e4f389971c5c50c5905282a7ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744639282861091804,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977ef6515169eed38b9ed7443d502bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1325a769984c7c2b1abd652d4fff1402a1f2fbae781977b7497a086f67193d89,PodSandboxId:0623693a3eaad57f6e6f792fceade9a23dce466e478091b19cc37f37ee7910d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744639282899457723,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e22cb0ce8096984b632fd88aa5fc36ae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a768cc6261e969cdf48541d3c9173bf7169640aef0ffc6dacfdadf3990e58d,PodSandboxId:5f429143a39281a4196184cb37045d7a4686d33e43f71737e9358747dc040950,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744639282659625848,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea445b470a6db809de2c0cc6a99f4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7e70afba867cc238cf48ae24ccb9872aac09f0a86076a8e8dd9b23be3e32e3,PodSandboxId:4b43a30216a1d94b23079cbc5a2b6a04ab737e869ccd6962049f46c184118cd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744639216495676613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-547jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8bea5012-9c20-4ef8-929c-cc18c3576650 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.172029065Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=a75982ad-f8bc-4d31-a3cf-1405e53c8b29 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.172113700Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a75982ad-f8bc-4d31-a3cf-1405e53c8b29 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.193446191Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cda9df87-9d4c-4a84-b737-b59f47154b13 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.193515551Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cda9df87-9d4c-4a84-b737-b59f47154b13 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.194472512Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c2c78e97-7434-461c-9212-0da1ae548ace name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.194909522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639312194884650,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c2c78e97-7434-461c-9212-0da1ae548ace name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.195265564Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=178d50a3-a04a-4f0a-b6c9-c7c7dbfc3625 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.195313295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=178d50a3-a04a-4f0a-b6c9-c7c7dbfc3625 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.195553779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8754ae5c4800f6831836ae31a48b6cf1813b9c346d066760990bb16525a55834,PodSandboxId:203dfc2d964c580384d7c6bdebdfb79d590661788fcf654bc2e95c9a1b379206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744639292047558151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25n6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9911ef4ec8193c8108f56966aaf9cf59202bdc9aae18b1592cc693ba2e429a86,PodSandboxId:d31ffebaa536dce9882b2c65c9b013fa728c86b4309ac57646ce0c23af2488fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744639292031376039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-547jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b27895b4bdecd768e0c4c8d5cba45bf39ccd5f1c11f15276a18a968abdb256b,PodSandboxId:c761c6aad896b4be27b4f3d690eb10e1e63bc73b91270abfbcf27143f64da33f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744639288432601722,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977ef651516
9eed38b9ed7443d502bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba7dd29999fa053a0a8f5c89462c2cb0656d52f4b2170b4b1d0daa3957e9df4,PodSandboxId:8bff14b15bb8534ddfd602a5722a795cbf8323bcc111b8fb6af61fa9a24d1407,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744639288402930498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e
02c1612d07445bf37eea8fbba07efd,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980fca4f660be3b3111c11dd5777d63f82e32a8c0fb3a14362e65ab341324c10,PodSandboxId:cc8ac8dcf17a58e5b4867aa66f21f596f65b2ca214c27741e23c138c1577296b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744639288419456703,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e22cb0ce8096984b632fd88aa5fc36ae,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a72cc76bdc27faa8def51dc546906bd4a19ef9c95c40cdcda380afbe1200d4,PodSandboxId:379d8e077c5a25b8a9f0a02b9bbf82525aaaa05d4446490bbdde4c8e403e0268,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744639288444444642,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea445b470a6db809de2c0cc6a99f4b0,},Annotations:map[string]string{io.
kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53812db1656a8544a0abe67ec5d3d9e5ff5f9ac7329e534d6639bad53d98eff0,PodSandboxId:88d265729b343e0aa15e1eadd7c0f497d433a750f0b827a3b6a2bf2bb9a1ec4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1744639282963032600,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25n6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f3a7c9af9bdfa1c23e6c10652eaecd817b2698b70f563618cb8633540811d,PodSandboxId:0eab3308f6db74472336f9faf9d8612b5c8ffe5ce16747be4dc9a465765d91fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744639282985516952,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e02c1612d07445bf37eea8fbba07efd,},Annotations:map[string]string{io.kubernetes.container.hash:
51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce231159b9655f6c50bff76f24391c112dda60887b48247d4818ede864b7678,PodSandboxId:2c4072a4422034e3886a773f60af0b24ef65b5e4f389971c5c50c5905282a7ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744639282861091804,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977ef6515169eed38b9ed7443d502bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1325a769984c7c2b1abd652d4fff1402a1f2fbae781977b7497a086f67193d89,PodSandboxId:0623693a3eaad57f6e6f792fceade9a23dce466e478091b19cc37f37ee7910d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744639282899457723,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e22cb0ce8096984b632fd88aa5fc36ae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a768cc6261e969cdf48541d3c9173bf7169640aef0ffc6dacfdadf3990e58d,PodSandboxId:5f429143a39281a4196184cb37045d7a4686d33e43f71737e9358747dc040950,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744639282659625848,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea445b470a6db809de2c0cc6a99f4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7e70afba867cc238cf48ae24ccb9872aac09f0a86076a8e8dd9b23be3e32e3,PodSandboxId:4b43a30216a1d94b23079cbc5a2b6a04ab737e869ccd6962049f46c184118cd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744639216495676613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-547jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=178d50a3-a04a-4f0a-b6c9-c7c7dbfc3625 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.244987006Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39bd0aa6-70b9-40c1-9b2f-59c2153e111c name=/runtime.v1.RuntimeService/Version
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.245090526Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39bd0aa6-70b9-40c1-9b2f-59c2153e111c name=/runtime.v1.RuntimeService/Version
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.247140020Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed0bb318-0405-4dd5-81b2-a81199e55f93 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.247658079Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639312247623470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed0bb318-0405-4dd5-81b2-a81199e55f93 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.248472447Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0807d5d-b438-4afd-b03f-cf53207aeee0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.248543557Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0807d5d-b438-4afd-b03f-cf53207aeee0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.249011692Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8754ae5c4800f6831836ae31a48b6cf1813b9c346d066760990bb16525a55834,PodSandboxId:203dfc2d964c580384d7c6bdebdfb79d590661788fcf654bc2e95c9a1b379206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744639292047558151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25n6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9911ef4ec8193c8108f56966aaf9cf59202bdc9aae18b1592cc693ba2e429a86,PodSandboxId:d31ffebaa536dce9882b2c65c9b013fa728c86b4309ac57646ce0c23af2488fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744639292031376039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-547jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b27895b4bdecd768e0c4c8d5cba45bf39ccd5f1c11f15276a18a968abdb256b,PodSandboxId:c761c6aad896b4be27b4f3d690eb10e1e63bc73b91270abfbcf27143f64da33f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744639288432601722,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977ef651516
9eed38b9ed7443d502bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba7dd29999fa053a0a8f5c89462c2cb0656d52f4b2170b4b1d0daa3957e9df4,PodSandboxId:8bff14b15bb8534ddfd602a5722a795cbf8323bcc111b8fb6af61fa9a24d1407,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744639288402930498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e
02c1612d07445bf37eea8fbba07efd,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980fca4f660be3b3111c11dd5777d63f82e32a8c0fb3a14362e65ab341324c10,PodSandboxId:cc8ac8dcf17a58e5b4867aa66f21f596f65b2ca214c27741e23c138c1577296b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744639288419456703,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e22cb0ce8096984b632fd88aa5fc36ae,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a72cc76bdc27faa8def51dc546906bd4a19ef9c95c40cdcda380afbe1200d4,PodSandboxId:379d8e077c5a25b8a9f0a02b9bbf82525aaaa05d4446490bbdde4c8e403e0268,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744639288444444642,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea445b470a6db809de2c0cc6a99f4b0,},Annotations:map[string]string{io.
kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53812db1656a8544a0abe67ec5d3d9e5ff5f9ac7329e534d6639bad53d98eff0,PodSandboxId:88d265729b343e0aa15e1eadd7c0f497d433a750f0b827a3b6a2bf2bb9a1ec4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1744639282963032600,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25n6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f3a7c9af9bdfa1c23e6c10652eaecd817b2698b70f563618cb8633540811d,PodSandboxId:0eab3308f6db74472336f9faf9d8612b5c8ffe5ce16747be4dc9a465765d91fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744639282985516952,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e02c1612d07445bf37eea8fbba07efd,},Annotations:map[string]string{io.kubernetes.container.hash:
51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce231159b9655f6c50bff76f24391c112dda60887b48247d4818ede864b7678,PodSandboxId:2c4072a4422034e3886a773f60af0b24ef65b5e4f389971c5c50c5905282a7ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744639282861091804,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977ef6515169eed38b9ed7443d502bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1325a769984c7c2b1abd652d4fff1402a1f2fbae781977b7497a086f67193d89,PodSandboxId:0623693a3eaad57f6e6f792fceade9a23dce466e478091b19cc37f37ee7910d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744639282899457723,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e22cb0ce8096984b632fd88aa5fc36ae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a768cc6261e969cdf48541d3c9173bf7169640aef0ffc6dacfdadf3990e58d,PodSandboxId:5f429143a39281a4196184cb37045d7a4686d33e43f71737e9358747dc040950,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744639282659625848,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea445b470a6db809de2c0cc6a99f4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7e70afba867cc238cf48ae24ccb9872aac09f0a86076a8e8dd9b23be3e32e3,PodSandboxId:4b43a30216a1d94b23079cbc5a2b6a04ab737e869ccd6962049f46c184118cd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744639216495676613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-547jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0807d5d-b438-4afd-b03f-cf53207aeee0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.305135668Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20e57060-dda1-4e34-b5f9-453ee84a3f46 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.305253345Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20e57060-dda1-4e34-b5f9-453ee84a3f46 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.307618179Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ff08e69-45c4-4506-94ea-38d1abb59f4a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.308222479Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639312308187806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ff08e69-45c4-4506-94ea-38d1abb59f4a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.311030278Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f918a4aa-c51d-4602-b04e-3a3f25eb7189 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.311106727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f918a4aa-c51d-4602-b04e-3a3f25eb7189 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:01:52 pause-648153 crio[2982]: time="2025-04-14 14:01:52.311394183Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8754ae5c4800f6831836ae31a48b6cf1813b9c346d066760990bb16525a55834,PodSandboxId:203dfc2d964c580384d7c6bdebdfb79d590661788fcf654bc2e95c9a1b379206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744639292047558151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25n6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9911ef4ec8193c8108f56966aaf9cf59202bdc9aae18b1592cc693ba2e429a86,PodSandboxId:d31ffebaa536dce9882b2c65c9b013fa728c86b4309ac57646ce0c23af2488fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744639292031376039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-547jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b27895b4bdecd768e0c4c8d5cba45bf39ccd5f1c11f15276a18a968abdb256b,PodSandboxId:c761c6aad896b4be27b4f3d690eb10e1e63bc73b91270abfbcf27143f64da33f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744639288432601722,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977ef651516
9eed38b9ed7443d502bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba7dd29999fa053a0a8f5c89462c2cb0656d52f4b2170b4b1d0daa3957e9df4,PodSandboxId:8bff14b15bb8534ddfd602a5722a795cbf8323bcc111b8fb6af61fa9a24d1407,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744639288402930498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e
02c1612d07445bf37eea8fbba07efd,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980fca4f660be3b3111c11dd5777d63f82e32a8c0fb3a14362e65ab341324c10,PodSandboxId:cc8ac8dcf17a58e5b4867aa66f21f596f65b2ca214c27741e23c138c1577296b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744639288419456703,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e22cb0ce8096984b632fd88aa5fc36ae,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a72cc76bdc27faa8def51dc546906bd4a19ef9c95c40cdcda380afbe1200d4,PodSandboxId:379d8e077c5a25b8a9f0a02b9bbf82525aaaa05d4446490bbdde4c8e403e0268,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744639288444444642,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea445b470a6db809de2c0cc6a99f4b0,},Annotations:map[string]string{io.
kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53812db1656a8544a0abe67ec5d3d9e5ff5f9ac7329e534d6639bad53d98eff0,PodSandboxId:88d265729b343e0aa15e1eadd7c0f497d433a750f0b827a3b6a2bf2bb9a1ec4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1744639282963032600,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25n6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8400d3e1-b5ba-49a2-b916-fe8d6188fd6a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f3a7c9af9bdfa1c23e6c10652eaecd817b2698b70f563618cb8633540811d,PodSandboxId:0eab3308f6db74472336f9faf9d8612b5c8ffe5ce16747be4dc9a465765d91fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744639282985516952,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e02c1612d07445bf37eea8fbba07efd,},Annotations:map[string]string{io.kubernetes.container.hash:
51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce231159b9655f6c50bff76f24391c112dda60887b48247d4818ede864b7678,PodSandboxId:2c4072a4422034e3886a773f60af0b24ef65b5e4f389971c5c50c5905282a7ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744639282861091804,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977ef6515169eed38b9ed7443d502bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1325a769984c7c2b1abd652d4fff1402a1f2fbae781977b7497a086f67193d89,PodSandboxId:0623693a3eaad57f6e6f792fceade9a23dce466e478091b19cc37f37ee7910d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744639282899457723,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e22cb0ce8096984b632fd88aa5fc36ae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a768cc6261e969cdf48541d3c9173bf7169640aef0ffc6dacfdadf3990e58d,PodSandboxId:5f429143a39281a4196184cb37045d7a4686d33e43f71737e9358747dc040950,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744639282659625848,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-648153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea445b470a6db809de2c0cc6a99f4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7e70afba867cc238cf48ae24ccb9872aac09f0a86076a8e8dd9b23be3e32e3,PodSandboxId:4b43a30216a1d94b23079cbc5a2b6a04ab737e869ccd6962049f46c184118cd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744639216495676613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-547jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9d901a-c53e-4a1d-9e5b-cb668fc9c105,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f918a4aa-c51d-4602-b04e-3a3f25eb7189 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8754ae5c4800f       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   20 seconds ago       Running             kube-proxy                2                   203dfc2d964c5       kube-proxy-25n6s
	9911ef4ec8193       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   20 seconds ago       Running             coredns                   1                   d31ffebaa536d       coredns-668d6bf9bc-547jp
	66a72cc76bdc2       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   23 seconds ago       Running             kube-scheduler            2                   379d8e077c5a2       kube-scheduler-pause-648153
	3b27895b4bdec       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   23 seconds ago       Running             kube-apiserver            2                   c761c6aad896b       kube-apiserver-pause-648153
	980fca4f660be       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   23 seconds ago       Running             etcd                      2                   cc8ac8dcf17a5       etcd-pause-648153
	eba7dd29999fa       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   24 seconds ago       Running             kube-controller-manager   2                   8bff14b15bb85       kube-controller-manager-pause-648153
	a72f3a7c9af9b       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   29 seconds ago       Exited              kube-controller-manager   1                   0eab3308f6db7       kube-controller-manager-pause-648153
	53812db1656a8       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   29 seconds ago       Exited              kube-proxy                1                   88d265729b343       kube-proxy-25n6s
	1325a769984c7       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   29 seconds ago       Exited              etcd                      1                   0623693a3eaad       etcd-pause-648153
	cce231159b965       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   29 seconds ago       Exited              kube-apiserver            1                   2c4072a442203       kube-apiserver-pause-648153
	49a768cc6261e       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   29 seconds ago       Exited              kube-scheduler            1                   5f429143a3928       kube-scheduler-pause-648153
	2a7e70afba867       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   4b43a30216a1d       coredns-668d6bf9bc-547jp
	
	
	==> coredns [2a7e70afba867cc238cf48ae24ccb9872aac09f0a86076a8e8dd9b23be3e32e3] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1032249385]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:00:16.777) (total time: 30006ms):
	Trace[1032249385]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (14:00:46.782)
	Trace[1032249385]: [30.006164861s] [30.006164861s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[513769814]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:00:16.777) (total time: 30006ms):
	Trace[513769814]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (14:00:46.782)
	Trace[513769814]: [30.006466232s] [30.006466232s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1550010925]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:00:16.781) (total time: 30002ms):
	Trace[1550010925]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (14:00:46.783)
	Trace[1550010925]: [30.002250999s] [30.002250999s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	[INFO] Reloading complete
	[INFO] 127.0.0.1:41803 - 17721 "HINFO IN 8248068367183701146.3141727108954957871. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010731802s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9911ef4ec8193c8108f56966aaf9cf59202bdc9aae18b1592cc693ba2e429a86] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34440 - 18667 "HINFO IN 3602844357350121543.6231432446882578563. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011900804s
	
	
	==> describe nodes <==
	Name:               pause-648153
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-648153
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=460835bb8f21087bfa90e48a25f4afc66a903d88
	                    minikube.k8s.io/name=pause-648153
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T14_00_10_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:00:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-648153
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:01:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:01:31 +0000   Mon, 14 Apr 2025 14:00:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:01:31 +0000   Mon, 14 Apr 2025 14:00:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:01:31 +0000   Mon, 14 Apr 2025 14:00:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:01:31 +0000   Mon, 14 Apr 2025 14:00:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.188
	  Hostname:    pause-648153
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 b92df190d95c446491ec92767e777450
	  System UUID:                b92df190-d95c-4464-91ec-92767e777450
	  Boot ID:                    08a123f5-fd8d-497e-9169-5bb85fece951
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-547jp                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     97s
	  kube-system                 etcd-pause-648153                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         105s
	  kube-system                 kube-apiserver-pause-648153             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-pause-648153    200m (10%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-25n6s                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-scheduler-pause-648153             100m (5%)     0 (0%)      0 (0%)           0 (0%)         105s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 95s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  NodeHasSufficientPID     103s               kubelet          Node pause-648153 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  103s               kubelet          Node pause-648153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s               kubelet          Node pause-648153 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 103s               kubelet          Starting kubelet.
	  Normal  NodeReady                102s               kubelet          Node pause-648153 status is now: NodeReady
	  Normal  RegisteredNode           98s                node-controller  Node pause-648153 event: Registered Node pause-648153 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  24s (x8 over 25s)  kubelet          Node pause-648153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 25s)  kubelet          Node pause-648153 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 25s)  kubelet          Node pause-648153 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18s                node-controller  Node pause-648153 event: Registered Node pause-648153 in Controller
	
	
	==> dmesg <==
	[  +9.895027] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.057983] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072030] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.189336] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.125071] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.281335] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.569322] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +0.064550] kauditd_printk_skb: 130 callbacks suppressed
	[Apr14 14:00] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +1.218817] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.846395] systemd-fstab-generator[1238]: Ignoring "noauto" option for root device
	[  +0.096582] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.885342] systemd-fstab-generator[1380]: Ignoring "noauto" option for root device
	[  +0.142392] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.757389] kauditd_printk_skb: 88 callbacks suppressed
	[Apr14 14:01] systemd-fstab-generator[2332]: Ignoring "noauto" option for root device
	[  +0.157935] systemd-fstab-generator[2344]: Ignoring "noauto" option for root device
	[  +0.268423] systemd-fstab-generator[2427]: Ignoring "noauto" option for root device
	[  +0.259603] systemd-fstab-generator[2539]: Ignoring "noauto" option for root device
	[  +0.867748] systemd-fstab-generator[2885]: Ignoring "noauto" option for root device
	[  +1.185821] systemd-fstab-generator[3152]: Ignoring "noauto" option for root device
	[  +2.706873] systemd-fstab-generator[3612]: Ignoring "noauto" option for root device
	[  +0.089522] kauditd_printk_skb: 238 callbacks suppressed
	[  +5.106943] kauditd_printk_skb: 48 callbacks suppressed
	[ +11.168354] systemd-fstab-generator[4078]: Ignoring "noauto" option for root device
	
	
	==> etcd [1325a769984c7c2b1abd652d4fff1402a1f2fbae781977b7497a086f67193d89] <==
	{"level":"info","ts":"2025-04-14T14:01:23.385548Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-04-14T14:01:23.453323Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"5178d02fe96ee090","local-member-id":"a1d4c90ecc3171ce","commit-index":430}
	{"level":"info","ts":"2025-04-14T14:01:23.453554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4c90ecc3171ce switched to configuration voters=()"}
	{"level":"info","ts":"2025-04-14T14:01:23.453714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4c90ecc3171ce became follower at term 2"}
	{"level":"info","ts":"2025-04-14T14:01:23.453811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft a1d4c90ecc3171ce [peers: [], term: 2, commit: 430, applied: 0, lastindex: 430, lastterm: 2]"}
	{"level":"warn","ts":"2025-04-14T14:01:23.468132Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-04-14T14:01:23.521294Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":402}
	{"level":"info","ts":"2025-04-14T14:01:23.531062Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-04-14T14:01:23.538627Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"a1d4c90ecc3171ce","timeout":"7s"}
	{"level":"info","ts":"2025-04-14T14:01:23.539344Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"a1d4c90ecc3171ce"}
	{"level":"info","ts":"2025-04-14T14:01:23.539525Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"a1d4c90ecc3171ce","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-14T14:01:23.540710Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:01:23.541113Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-04-14T14:01:23.541320Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-14T14:01:23.541456Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-14T14:01:23.541468Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-14T14:01:23.541875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4c90ecc3171ce switched to configuration voters=(11661166400561574350)"}
	{"level":"info","ts":"2025-04-14T14:01:23.541970Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5178d02fe96ee090","local-member-id":"a1d4c90ecc3171ce","added-peer-id":"a1d4c90ecc3171ce","added-peer-peer-urls":["https://192.168.61.188:2380"]}
	{"level":"info","ts":"2025-04-14T14:01:23.542064Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5178d02fe96ee090","local-member-id":"a1d4c90ecc3171ce","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:01:23.542100Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:01:23.544424Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T14:01:23.556174Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.61.188:2380"}
	{"level":"info","ts":"2025-04-14T14:01:23.556209Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.61.188:2380"}
	{"level":"info","ts":"2025-04-14T14:01:23.565469Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"a1d4c90ecc3171ce","initial-advertise-peer-urls":["https://192.168.61.188:2380"],"listen-peer-urls":["https://192.168.61.188:2380"],"advertise-client-urls":["https://192.168.61.188:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.188:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T14:01:23.565529Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [980fca4f660be3b3111c11dd5777d63f82e32a8c0fb3a14362e65ab341324c10] <==
	{"level":"info","ts":"2025-04-14T14:01:28.897443Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5178d02fe96ee090","local-member-id":"a1d4c90ecc3171ce","added-peer-id":"a1d4c90ecc3171ce","added-peer-peer-urls":["https://192.168.61.188:2380"]}
	{"level":"info","ts":"2025-04-14T14:01:28.897537Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5178d02fe96ee090","local-member-id":"a1d4c90ecc3171ce","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:01:28.897577Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:01:28.897980Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:01:28.902403Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T14:01:28.904076Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"a1d4c90ecc3171ce","initial-advertise-peer-urls":["https://192.168.61.188:2380"],"listen-peer-urls":["https://192.168.61.188:2380"],"advertise-client-urls":["https://192.168.61.188:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.188:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T14:01:28.904137Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-14T14:01:28.904218Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.61.188:2380"}
	{"level":"info","ts":"2025-04-14T14:01:28.904240Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.61.188:2380"}
	{"level":"info","ts":"2025-04-14T14:01:30.161123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4c90ecc3171ce is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-14T14:01:30.161229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4c90ecc3171ce became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-14T14:01:30.161280Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4c90ecc3171ce received MsgPreVoteResp from a1d4c90ecc3171ce at term 2"}
	{"level":"info","ts":"2025-04-14T14:01:30.161309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4c90ecc3171ce became candidate at term 3"}
	{"level":"info","ts":"2025-04-14T14:01:30.161327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4c90ecc3171ce received MsgVoteResp from a1d4c90ecc3171ce at term 3"}
	{"level":"info","ts":"2025-04-14T14:01:30.161347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4c90ecc3171ce became leader at term 3"}
	{"level":"info","ts":"2025-04-14T14:01:30.161366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a1d4c90ecc3171ce elected leader a1d4c90ecc3171ce at term 3"}
	{"level":"info","ts":"2025-04-14T14:01:30.165963Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:01:30.166456Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:01:30.165967Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"a1d4c90ecc3171ce","local-member-attributes":"{Name:pause-648153 ClientURLs:[https://192.168.61.188:2379]}","request-path":"/0/members/a1d4c90ecc3171ce/attributes","cluster-id":"5178d02fe96ee090","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T14:01:30.166969Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:01:30.167031Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T14:01:30.167418Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:01:30.167596Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:01:30.168617Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.188:2379"}
	{"level":"info","ts":"2025-04-14T14:01:30.168669Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:01:52 up 2 min,  0 users,  load average: 1.75, 0.61, 0.22
	Linux pause-648153 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3b27895b4bdecd768e0c4c8d5cba45bf39ccd5f1c11f15276a18a968abdb256b] <==
	I0414 14:01:31.666983       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0414 14:01:31.679099       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0414 14:01:31.679192       1 policy_source.go:240] refreshing policies
	I0414 14:01:31.679256       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0414 14:01:31.679280       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0414 14:01:31.679315       1 aggregator.go:171] initial CRD sync complete...
	I0414 14:01:31.679339       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:01:31.679358       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:01:31.679373       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:01:31.681955       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0414 14:01:31.682447       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0414 14:01:31.692987       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0414 14:01:31.697622       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0414 14:01:31.704033       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0414 14:01:31.737912       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:01:31.740499       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:01:31.751282       1 shared_informer.go:320] Caches are synced for configmaps
	E0414 14:01:31.793577       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0414 14:01:32.461665       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:01:33.064861       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:01:33.111040       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:01:33.150260       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 14:01:33.157331       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 14:01:34.866403       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 14:01:35.165487       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [cce231159b9655f6c50bff76f24391c112dda60887b48247d4818ede864b7678] <==
	W0414 14:01:23.467591       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0414 14:01:23.473930       1 options.go:238] external host was not specified, using 192.168.61.188
	I0414 14:01:23.481303       1 server.go:143] Version: v1.32.2
	I0414 14:01:23.482128       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [a72f3a7c9af9bdfa1c23e6c10652eaecd817b2698b70f563618cb8633540811d] <==
	
	
	==> kube-controller-manager [eba7dd29999fa053a0a8f5c89462c2cb0656d52f4b2170b4b1d0daa3957e9df4] <==
	I0414 14:01:34.876259       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0414 14:01:34.876383       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-648153"
	I0414 14:01:34.880151       1 shared_informer.go:320] Caches are synced for deployment
	I0414 14:01:34.882597       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0414 14:01:34.888038       1 shared_informer.go:320] Caches are synced for persistent volume
	I0414 14:01:34.891517       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 14:01:34.899860       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0414 14:01:34.907942       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0414 14:01:34.908004       1 shared_informer.go:320] Caches are synced for disruption
	I0414 14:01:34.908140       1 shared_informer.go:320] Caches are synced for HPA
	I0414 14:01:34.909334       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0414 14:01:34.910665       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0414 14:01:34.910836       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0414 14:01:34.910868       1 shared_informer.go:320] Caches are synced for taint
	I0414 14:01:34.910935       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0414 14:01:34.910981       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0414 14:01:34.911074       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-648153"
	I0414 14:01:34.911162       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0414 14:01:34.911270       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0414 14:01:34.912957       1 shared_informer.go:320] Caches are synced for ephemeral
	I0414 14:01:34.916164       1 shared_informer.go:320] Caches are synced for TTL
	I0414 14:01:34.919574       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0414 14:01:34.919611       1 shared_informer.go:320] Caches are synced for job
	I0414 14:01:34.922208       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0414 14:01:34.925691       1 shared_informer.go:320] Caches are synced for attach detach
	
	
	==> kube-proxy [53812db1656a8544a0abe67ec5d3d9e5ff5f9ac7329e534d6639bad53d98eff0] <==
	
	
	==> kube-proxy [8754ae5c4800f6831836ae31a48b6cf1813b9c346d066760990bb16525a55834] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:01:32.386618       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:01:32.397977       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.188"]
	E0414 14:01:32.398092       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:01:32.437305       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:01:32.437353       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:01:32.437377       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:01:32.440899       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:01:32.441365       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:01:32.441723       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:01:32.444350       1 config.go:199] "Starting service config controller"
	I0414 14:01:32.444408       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:01:32.444436       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:01:32.444440       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:01:32.448022       1 config.go:329] "Starting node config controller"
	I0414 14:01:32.448097       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:01:32.544801       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 14:01:32.544806       1 shared_informer.go:320] Caches are synced for service config
	I0414 14:01:32.548614       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [49a768cc6261e969cdf48541d3c9173bf7169640aef0ffc6dacfdadf3990e58d] <==
	
	
	==> kube-scheduler [66a72cc76bdc27faa8def51dc546906bd4a19ef9c95c40cdcda380afbe1200d4] <==
	I0414 14:01:29.662323       1 serving.go:386] Generated self-signed cert in-memory
	W0414 14:01:31.644020       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 14:01:31.644109       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 14:01:31.644133       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 14:01:31.644152       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 14:01:31.668050       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0414 14:01:31.668188       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:01:31.670532       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0414 14:01:31.670637       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 14:01:31.671303       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 14:01:31.670650       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0414 14:01:31.772523       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 14:01:30 pause-648153 kubelet[3619]: E0414 14:01:30.960981    3619 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-648153\" not found" node="pause-648153"
	Apr 14 14:01:30 pause-648153 kubelet[3619]: E0414 14:01:30.962955    3619 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-648153\" not found" node="pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.623834    3619 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.699767    3619 apiserver.go:52] "Watching apiserver"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.716602    3619 kubelet_node_status.go:125] "Node was previously registered" node="pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.716790    3619 kubelet_node_status.go:79] "Successfully registered node" node="pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.716814    3619 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.718192    3619 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.726019    3619 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.726908    3619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8400d3e1-b5ba-49a2-b916-fe8d6188fd6a-xtables-lock\") pod \"kube-proxy-25n6s\" (UID: \"8400d3e1-b5ba-49a2-b916-fe8d6188fd6a\") " pod="kube-system/kube-proxy-25n6s"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.727007    3619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8400d3e1-b5ba-49a2-b916-fe8d6188fd6a-lib-modules\") pod \"kube-proxy-25n6s\" (UID: \"8400d3e1-b5ba-49a2-b916-fe8d6188fd6a\") " pod="kube-system/kube-proxy-25n6s"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: E0414 14:01:31.787881    3619 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-648153\" already exists" pod="kube-system/etcd-pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.788049    3619 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: E0414 14:01:31.802343    3619 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-648153\" already exists" pod="kube-system/kube-apiserver-pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.802464    3619 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: E0414 14:01:31.825988    3619 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-648153\" already exists" pod="kube-system/kube-controller-manager-pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: I0414 14:01:31.826197    3619 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-648153"
	Apr 14 14:01:31 pause-648153 kubelet[3619]: E0414 14:01:31.846486    3619 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-648153\" already exists" pod="kube-system/kube-scheduler-pause-648153"
	Apr 14 14:01:32 pause-648153 kubelet[3619]: I0414 14:01:32.008379    3619 scope.go:117] "RemoveContainer" containerID="2a7e70afba867cc238cf48ae24ccb9872aac09f0a86076a8e8dd9b23be3e32e3"
	Apr 14 14:01:32 pause-648153 kubelet[3619]: I0414 14:01:32.009571    3619 scope.go:117] "RemoveContainer" containerID="53812db1656a8544a0abe67ec5d3d9e5ff5f9ac7329e534d6639bad53d98eff0"
	Apr 14 14:01:34 pause-648153 kubelet[3619]: I0414 14:01:34.156510    3619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 14 14:01:37 pause-648153 kubelet[3619]: E0414 14:01:37.875719    3619 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639297875401737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 14:01:37 pause-648153 kubelet[3619]: E0414 14:01:37.875792    3619 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639297875401737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 14:01:47 pause-648153 kubelet[3619]: E0414 14:01:47.878518    3619 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639307878235157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 14:01:47 pause-648153 kubelet[3619]: E0414 14:01:47.878549    3619 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639307878235157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-648153 -n pause-648153
helpers_test.go:261: (dbg) Run:  kubectl --context pause-648153 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (58.59s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (307.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-954411 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-954411 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m7.047643421s)

                                                
                                                
-- stdout --
	* [old-k8s-version-954411] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20623
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-954411" primary control-plane node in "old-k8s-version-954411" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 14:01:06.919681 2231425 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:01:06.919776 2231425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:01:06.919781 2231425 out.go:358] Setting ErrFile to fd 2...
	I0414 14:01:06.919785 2231425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:01:06.919986 2231425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 14:01:06.920601 2231425 out.go:352] Setting JSON to false
	I0414 14:01:06.921663 2231425 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":168206,"bootTime":1744471061,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:01:06.921766 2231425 start.go:139] virtualization: kvm guest
	I0414 14:01:06.923788 2231425 out.go:177] * [old-k8s-version-954411] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:01:06.925039 2231425 out.go:177]   - MINIKUBE_LOCATION=20623
	I0414 14:01:06.925039 2231425 notify.go:220] Checking for updates...
	I0414 14:01:06.927258 2231425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:01:06.928385 2231425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:01:06.929632 2231425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:01:06.930732 2231425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:01:06.931920 2231425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:01:06.933590 2231425 config.go:182] Loaded profile config "force-systemd-flag-509258": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:01:06.933680 2231425 config.go:182] Loaded profile config "kubernetes-upgrade-461086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:01:06.933788 2231425 config.go:182] Loaded profile config "pause-648153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:01:06.933880 2231425 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:01:06.967994 2231425 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:01:06.969213 2231425 start.go:297] selected driver: kvm2
	I0414 14:01:06.969240 2231425 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:01:06.969257 2231425 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:01:06.969946 2231425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:06.970043 2231425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20623-2183077/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:01:06.985778 2231425 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:01:06.985827 2231425 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 14:01:06.986086 2231425 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:01:06.986129 2231425 cni.go:84] Creating CNI manager for ""
	I0414 14:01:06.986171 2231425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:01:06.986180 2231425 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 14:01:06.986226 2231425 start.go:340] cluster config:
	{Name:old-k8s-version-954411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-954411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:01:06.986308 2231425 iso.go:125] acquiring lock: {Name:mk1b6bc811d798b73231639961523f4c8d001a9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:01:06.987931 2231425 out.go:177] * Starting "old-k8s-version-954411" primary control-plane node in "old-k8s-version-954411" cluster
	I0414 14:01:06.988888 2231425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 14:01:06.988939 2231425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 14:01:06.988959 2231425 cache.go:56] Caching tarball of preloaded images
	I0414 14:01:06.989051 2231425 preload.go:172] Found /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 14:01:06.989068 2231425 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0414 14:01:06.989172 2231425 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/config.json ...
	I0414 14:01:06.989191 2231425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/config.json: {Name:mkf395d3ac6ac7a536f2c134a728f8da0d0418cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:01:06.989380 2231425 start.go:360] acquireMachinesLock for old-k8s-version-954411: {Name:mka8bf7d0904b7ab9a32ecac2c5513c5d5418afd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:01:42.385450 2231425 start.go:364] duration metric: took 35.396017033s to acquireMachinesLock for "old-k8s-version-954411"
	I0414 14:01:42.385550 2231425 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-954411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-954411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:01:42.385687 2231425 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:01:42.387303 2231425 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:01:42.387570 2231425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:01:42.387643 2231425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:01:42.408560 2231425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46147
	I0414 14:01:42.409136 2231425 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:01:42.409723 2231425 main.go:141] libmachine: Using API Version  1
	I0414 14:01:42.409750 2231425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:01:42.410166 2231425 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:01:42.410355 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetMachineName
	I0414 14:01:42.410521 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:01:42.410672 2231425 start.go:159] libmachine.API.Create for "old-k8s-version-954411" (driver="kvm2")
	I0414 14:01:42.410709 2231425 client.go:168] LocalClient.Create starting
	I0414 14:01:42.410743 2231425 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem
	I0414 14:01:42.410794 2231425 main.go:141] libmachine: Decoding PEM data...
	I0414 14:01:42.410813 2231425 main.go:141] libmachine: Parsing certificate...
	I0414 14:01:42.410892 2231425 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem
	I0414 14:01:42.410921 2231425 main.go:141] libmachine: Decoding PEM data...
	I0414 14:01:42.410939 2231425 main.go:141] libmachine: Parsing certificate...
	I0414 14:01:42.410963 2231425 main.go:141] libmachine: Running pre-create checks...
	I0414 14:01:42.410974 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .PreCreateCheck
	I0414 14:01:42.411361 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetConfigRaw
	I0414 14:01:42.411765 2231425 main.go:141] libmachine: Creating machine...
	I0414 14:01:42.411782 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .Create
	I0414 14:01:42.411958 2231425 main.go:141] libmachine: (old-k8s-version-954411) creating KVM machine...
	I0414 14:01:42.411979 2231425 main.go:141] libmachine: (old-k8s-version-954411) creating network...
	I0414 14:01:42.413183 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found existing default KVM network
	I0414 14:01:42.414239 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:42.414087 2231858 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201190}
	I0414 14:01:42.414263 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | created network xml: 
	I0414 14:01:42.414282 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | <network>
	I0414 14:01:42.414295 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |   <name>mk-old-k8s-version-954411</name>
	I0414 14:01:42.414309 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |   <dns enable='no'/>
	I0414 14:01:42.414320 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |   
	I0414 14:01:42.414333 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 14:01:42.414344 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |     <dhcp>
	I0414 14:01:42.414353 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 14:01:42.414365 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |     </dhcp>
	I0414 14:01:42.414373 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |   </ip>
	I0414 14:01:42.414386 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG |   
	I0414 14:01:42.414395 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | </network>
	I0414 14:01:42.414404 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | 
	I0414 14:01:42.419672 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | trying to create private KVM network mk-old-k8s-version-954411 192.168.39.0/24...
	I0414 14:01:42.495567 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | private KVM network mk-old-k8s-version-954411 192.168.39.0/24 created
	I0414 14:01:42.495599 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:42.495526 2231858 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:01:42.495614 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting up store path in /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411 ...
	I0414 14:01:42.495631 2231425 main.go:141] libmachine: (old-k8s-version-954411) building disk image from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:01:42.495727 2231425 main.go:141] libmachine: (old-k8s-version-954411) Downloading /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:01:42.779984 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:42.779839 2231858 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa...
	I0414 14:01:42.941486 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:42.941322 2231858 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/old-k8s-version-954411.rawdisk...
	I0414 14:01:42.941548 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | Writing magic tar header
	I0414 14:01:42.941570 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | Writing SSH key tar header
	I0414 14:01:42.941589 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:42.941479 2231858 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411 ...
	I0414 14:01:42.941603 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411
	I0414 14:01:42.941624 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411 (perms=drwx------)
	I0414 14:01:42.941642 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines
	I0414 14:01:42.941652 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:01:42.941790 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:01:42.941856 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube (perms=drwxr-xr-x)
	I0414 14:01:42.941870 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077
	I0414 14:01:42.941895 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:01:42.941910 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home/jenkins
	I0414 14:01:42.941929 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | checking permissions on dir: /home
	I0414 14:01:42.941942 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | skipping /home - not owner
	I0414 14:01:42.941978 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077 (perms=drwxrwxr-x)
	I0414 14:01:42.942005 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:01:42.942019 2231425 main.go:141] libmachine: (old-k8s-version-954411) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:01:42.942030 2231425 main.go:141] libmachine: (old-k8s-version-954411) creating domain...
	I0414 14:01:42.943234 2231425 main.go:141] libmachine: (old-k8s-version-954411) define libvirt domain using xml: 
	I0414 14:01:42.943260 2231425 main.go:141] libmachine: (old-k8s-version-954411) <domain type='kvm'>
	I0414 14:01:42.943295 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <name>old-k8s-version-954411</name>
	I0414 14:01:42.943319 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <memory unit='MiB'>2200</memory>
	I0414 14:01:42.943331 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <vcpu>2</vcpu>
	I0414 14:01:42.943342 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <features>
	I0414 14:01:42.943353 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <acpi/>
	I0414 14:01:42.943364 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <apic/>
	I0414 14:01:42.943378 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <pae/>
	I0414 14:01:42.943393 2231425 main.go:141] libmachine: (old-k8s-version-954411)     
	I0414 14:01:42.943402 2231425 main.go:141] libmachine: (old-k8s-version-954411)   </features>
	I0414 14:01:42.943413 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <cpu mode='host-passthrough'>
	I0414 14:01:42.943425 2231425 main.go:141] libmachine: (old-k8s-version-954411)   
	I0414 14:01:42.943433 2231425 main.go:141] libmachine: (old-k8s-version-954411)   </cpu>
	I0414 14:01:42.943442 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <os>
	I0414 14:01:42.943453 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <type>hvm</type>
	I0414 14:01:42.943476 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <boot dev='cdrom'/>
	I0414 14:01:42.943496 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <boot dev='hd'/>
	I0414 14:01:42.943525 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <bootmenu enable='no'/>
	I0414 14:01:42.943535 2231425 main.go:141] libmachine: (old-k8s-version-954411)   </os>
	I0414 14:01:42.943544 2231425 main.go:141] libmachine: (old-k8s-version-954411)   <devices>
	I0414 14:01:42.943556 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <disk type='file' device='cdrom'>
	I0414 14:01:42.943587 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/boot2docker.iso'/>
	I0414 14:01:42.943601 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <target dev='hdc' bus='scsi'/>
	I0414 14:01:42.943607 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <readonly/>
	I0414 14:01:42.943615 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </disk>
	I0414 14:01:42.943624 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <disk type='file' device='disk'>
	I0414 14:01:42.943644 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:01:42.943664 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/old-k8s-version-954411.rawdisk'/>
	I0414 14:01:42.943677 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <target dev='hda' bus='virtio'/>
	I0414 14:01:42.943688 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </disk>
	I0414 14:01:42.943699 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <interface type='network'>
	I0414 14:01:42.943710 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <source network='mk-old-k8s-version-954411'/>
	I0414 14:01:42.943722 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <model type='virtio'/>
	I0414 14:01:42.943735 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </interface>
	I0414 14:01:42.943747 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <interface type='network'>
	I0414 14:01:42.943757 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <source network='default'/>
	I0414 14:01:42.943765 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <model type='virtio'/>
	I0414 14:01:42.943775 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </interface>
	I0414 14:01:42.943784 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <serial type='pty'>
	I0414 14:01:42.943794 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <target port='0'/>
	I0414 14:01:42.943802 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </serial>
	I0414 14:01:42.943812 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <console type='pty'>
	I0414 14:01:42.943821 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <target type='serial' port='0'/>
	I0414 14:01:42.943835 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </console>
	I0414 14:01:42.943847 2231425 main.go:141] libmachine: (old-k8s-version-954411)     <rng model='virtio'>
	I0414 14:01:42.943858 2231425 main.go:141] libmachine: (old-k8s-version-954411)       <backend model='random'>/dev/random</backend>
	I0414 14:01:42.943868 2231425 main.go:141] libmachine: (old-k8s-version-954411)     </rng>
	I0414 14:01:42.943877 2231425 main.go:141] libmachine: (old-k8s-version-954411)     
	I0414 14:01:42.943885 2231425 main.go:141] libmachine: (old-k8s-version-954411)     
	I0414 14:01:42.943894 2231425 main.go:141] libmachine: (old-k8s-version-954411)   </devices>
	I0414 14:01:42.943902 2231425 main.go:141] libmachine: (old-k8s-version-954411) </domain>
	I0414 14:01:42.943915 2231425 main.go:141] libmachine: (old-k8s-version-954411) 
	I0414 14:01:42.947328 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:c0:7b:40 in network default
	I0414 14:01:42.948005 2231425 main.go:141] libmachine: (old-k8s-version-954411) starting domain...
	I0414 14:01:42.948024 2231425 main.go:141] libmachine: (old-k8s-version-954411) ensuring networks are active...
	I0414 14:01:42.948036 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:42.948755 2231425 main.go:141] libmachine: (old-k8s-version-954411) Ensuring network default is active
	I0414 14:01:42.949156 2231425 main.go:141] libmachine: (old-k8s-version-954411) Ensuring network mk-old-k8s-version-954411 is active
	I0414 14:01:42.949711 2231425 main.go:141] libmachine: (old-k8s-version-954411) getting domain XML...
	I0414 14:01:42.950550 2231425 main.go:141] libmachine: (old-k8s-version-954411) creating domain...
	I0414 14:01:44.322603 2231425 main.go:141] libmachine: (old-k8s-version-954411) waiting for IP...
	I0414 14:01:44.323750 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:44.324363 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:44.324410 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:44.324348 2231858 retry.go:31] will retry after 279.076334ms: waiting for domain to come up
	I0414 14:01:44.605212 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:44.605923 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:44.605954 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:44.605856 2231858 retry.go:31] will retry after 254.872686ms: waiting for domain to come up
	I0414 14:01:44.862616 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:44.863190 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:44.863226 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:44.863176 2231858 retry.go:31] will retry after 298.853913ms: waiting for domain to come up
	I0414 14:01:45.164114 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:45.164912 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:45.164985 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:45.164894 2231858 retry.go:31] will retry after 536.754794ms: waiting for domain to come up
	I0414 14:01:45.703716 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:45.704247 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:45.704275 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:45.704222 2231858 retry.go:31] will retry after 518.01594ms: waiting for domain to come up
	I0414 14:01:46.224061 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:46.224567 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:46.224597 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:46.224521 2231858 retry.go:31] will retry after 811.819388ms: waiting for domain to come up
	I0414 14:01:47.037624 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:47.038212 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:47.038246 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:47.038178 2231858 retry.go:31] will retry after 810.475581ms: waiting for domain to come up
	I0414 14:01:47.850153 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:47.850836 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:47.850903 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:47.850807 2231858 retry.go:31] will retry after 1.485492391s: waiting for domain to come up
	I0414 14:01:49.338172 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:49.338706 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:49.338734 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:49.338676 2231858 retry.go:31] will retry after 1.687724196s: waiting for domain to come up
	I0414 14:01:51.027867 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:51.028424 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:51.028464 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:51.028392 2231858 retry.go:31] will retry after 1.522926636s: waiting for domain to come up
	I0414 14:01:52.553464 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:52.554194 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:52.554224 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:52.554173 2231858 retry.go:31] will retry after 2.433606804s: waiting for domain to come up
	I0414 14:01:54.989985 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:54.990668 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:54.990699 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:54.990618 2231858 retry.go:31] will retry after 2.83892329s: waiting for domain to come up
	I0414 14:01:57.831294 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:01:57.831749 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:01:57.831776 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:01:57.831713 2231858 retry.go:31] will retry after 3.144381263s: waiting for domain to come up
	I0414 14:02:00.978061 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:00.978497 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:02:00.978544 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:02:00.978471 2231858 retry.go:31] will retry after 4.522230468s: waiting for domain to come up
	I0414 14:02:05.503464 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.504013 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has current primary IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.504042 2231425 main.go:141] libmachine: (old-k8s-version-954411) found domain IP: 192.168.39.90
	I0414 14:02:05.504053 2231425 main.go:141] libmachine: (old-k8s-version-954411) reserving static IP address...
	I0414 14:02:05.504350 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-954411", mac: "52:54:00:e4:99:d7", ip: "192.168.39.90"} in network mk-old-k8s-version-954411
	I0414 14:02:05.589888 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | Getting to WaitForSSH function...
	I0414 14:02:05.589928 2231425 main.go:141] libmachine: (old-k8s-version-954411) reserved static IP address 192.168.39.90 for domain old-k8s-version-954411
	I0414 14:02:05.589942 2231425 main.go:141] libmachine: (old-k8s-version-954411) waiting for SSH...
	I0414 14:02:05.593022 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.593446 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:05.593475 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.593616 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | Using SSH client type: external
	I0414 14:02:05.593648 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | Using SSH private key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa (-rw-------)
	I0414 14:02:05.593692 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:02:05.593708 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | About to run SSH command:
	I0414 14:02:05.593719 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | exit 0
	I0414 14:02:05.721121 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | SSH cmd err, output: <nil>: 
	I0414 14:02:05.721417 2231425 main.go:141] libmachine: (old-k8s-version-954411) KVM machine creation complete
	I0414 14:02:05.721714 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetConfigRaw
	I0414 14:02:05.722325 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:02:05.722519 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:02:05.722666 2231425 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:02:05.722679 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetState
	I0414 14:02:05.723891 2231425 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:02:05.723906 2231425 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:02:05.723913 2231425 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:02:05.723921 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:05.726302 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.726658 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:05.726689 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.726828 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:05.727025 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:05.727169 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:05.727287 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:05.727485 2231425 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:05.727810 2231425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0414 14:02:05.727823 2231425 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:02:05.836170 2231425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:02:05.836199 2231425 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:02:05.836207 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:05.839446 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.839864 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:05.839893 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.840067 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:05.840266 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:05.840418 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:05.840577 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:05.840722 2231425 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:05.840967 2231425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0414 14:02:05.840979 2231425 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:02:05.949867 2231425 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:02:05.949956 2231425 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:02:05.949969 2231425 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:02:05.949980 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetMachineName
	I0414 14:02:05.950250 2231425 buildroot.go:166] provisioning hostname "old-k8s-version-954411"
	I0414 14:02:05.950288 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetMachineName
	I0414 14:02:05.950465 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:05.953094 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.953502 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:05.953539 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:05.953661 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:05.953876 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:05.954036 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:05.954254 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:05.954445 2231425 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:05.954805 2231425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0414 14:02:05.954835 2231425 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-954411 && echo "old-k8s-version-954411" | sudo tee /etc/hostname
	I0414 14:02:06.076251 2231425 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-954411
	
	I0414 14:02:06.076292 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:06.080415 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.080847 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:06.080896 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.081131 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:06.081336 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:06.081508 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:06.081666 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:06.081868 2231425 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:06.082163 2231425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0414 14:02:06.082187 2231425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-954411' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-954411/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-954411' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:02:06.198986 2231425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:02:06.199059 2231425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20623-2183077/.minikube CaCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20623-2183077/.minikube}
	I0414 14:02:06.199095 2231425 buildroot.go:174] setting up certificates
	I0414 14:02:06.199119 2231425 provision.go:84] configureAuth start
	I0414 14:02:06.199137 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetMachineName
	I0414 14:02:06.199506 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetIP
	I0414 14:02:06.202609 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.203013 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:06.203051 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.203181 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:06.205535 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.205856 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:06.205897 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.206020 2231425 provision.go:143] copyHostCerts
	I0414 14:02:06.206100 2231425 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem, removing ...
	I0414 14:02:06.206125 2231425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem
	I0414 14:02:06.206204 2231425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem (1123 bytes)
	I0414 14:02:06.206322 2231425 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem, removing ...
	I0414 14:02:06.206333 2231425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem
	I0414 14:02:06.206366 2231425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem (1675 bytes)
	I0414 14:02:06.206441 2231425 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem, removing ...
	I0414 14:02:06.206451 2231425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem
	I0414 14:02:06.206479 2231425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem (1078 bytes)
	I0414 14:02:06.206546 2231425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-954411 san=[127.0.0.1 192.168.39.90 localhost minikube old-k8s-version-954411]
	I0414 14:02:06.366218 2231425 provision.go:177] copyRemoteCerts
	I0414 14:02:06.366301 2231425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:02:06.366340 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:06.369647 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.370019 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:06.370054 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.370246 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:06.370475 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:06.370666 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:06.370826 2231425 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa Username:docker}
	I0414 14:02:06.455796 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 14:02:06.482620 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0414 14:02:06.507961 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:02:06.534083 2231425 provision.go:87] duration metric: took 334.941964ms to configureAuth
	I0414 14:02:06.534127 2231425 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:02:06.534326 2231425 config.go:182] Loaded profile config "old-k8s-version-954411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 14:02:06.534404 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:06.537117 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.537459 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:06.537515 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.537675 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:06.537900 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:06.538105 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:06.538265 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:06.538432 2231425 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:06.538667 2231425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0414 14:02:06.538683 2231425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 14:02:06.764766 2231425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 14:02:06.764796 2231425 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:02:06.764805 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetURL
	I0414 14:02:06.766384 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | using libvirt version 6000000
	I0414 14:02:06.768517 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.768966 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:06.769028 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.769171 2231425 main.go:141] libmachine: Docker is up and running!
	I0414 14:02:06.769190 2231425 main.go:141] libmachine: Reticulating splines...
	I0414 14:02:06.769199 2231425 client.go:171] duration metric: took 24.35847885s to LocalClient.Create
	I0414 14:02:06.769221 2231425 start.go:167] duration metric: took 24.358553067s to libmachine.API.Create "old-k8s-version-954411"
	I0414 14:02:06.769228 2231425 start.go:293] postStartSetup for "old-k8s-version-954411" (driver="kvm2")
	I0414 14:02:06.769237 2231425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:02:06.769255 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:02:06.769520 2231425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:02:06.769548 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:06.772067 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.772471 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:06.772503 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.772718 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:06.772928 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:06.773093 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:06.773260 2231425 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa Username:docker}
	I0414 14:02:06.855695 2231425 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:02:06.860172 2231425 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:02:06.860224 2231425 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/addons for local assets ...
	I0414 14:02:06.860312 2231425 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/files for local assets ...
	I0414 14:02:06.860410 2231425 filesync.go:149] local asset: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem -> 21904002.pem in /etc/ssl/certs
	I0414 14:02:06.860522 2231425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:02:06.870132 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:02:06.893118 2231425 start.go:296] duration metric: took 123.876511ms for postStartSetup
	I0414 14:02:06.893177 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetConfigRaw
	I0414 14:02:06.893787 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetIP
	I0414 14:02:06.896471 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.896752 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:06.896789 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.897083 2231425 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/config.json ...
	I0414 14:02:06.897288 2231425 start.go:128] duration metric: took 24.51158765s to createHost
	I0414 14:02:06.897315 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:06.899533 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.899839 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:06.899879 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:06.899988 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:06.900155 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:06.900310 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:06.900421 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:06.900559 2231425 main.go:141] libmachine: Using SSH client type: native
	I0414 14:02:06.900832 2231425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0414 14:02:06.900844 2231425 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:02:07.010129 2231425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744639326.991378871
	
	I0414 14:02:07.010176 2231425 fix.go:216] guest clock: 1744639326.991378871
	I0414 14:02:07.010188 2231425 fix.go:229] Guest: 2025-04-14 14:02:06.991378871 +0000 UTC Remote: 2025-04-14 14:02:06.897300925 +0000 UTC m=+60.018632384 (delta=94.077946ms)
	I0414 14:02:07.010242 2231425 fix.go:200] guest clock delta is within tolerance: 94.077946ms
	I0414 14:02:07.010253 2231425 start.go:83] releasing machines lock for "old-k8s-version-954411", held for 24.624766435s
	I0414 14:02:07.010296 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:02:07.010630 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetIP
	I0414 14:02:07.013833 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:07.014282 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:07.014303 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:07.014571 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:02:07.015116 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:02:07.015328 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:02:07.015453 2231425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:02:07.015500 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:07.015600 2231425 ssh_runner.go:195] Run: cat /version.json
	I0414 14:02:07.015631 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:02:07.018330 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:07.018676 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:07.018706 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:07.018725 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:07.018814 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:07.018995 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:07.019178 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:07.019193 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:07.019222 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:07.019346 2231425 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa Username:docker}
	I0414 14:02:07.019391 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:02:07.019510 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:02:07.019640 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:02:07.019783 2231425 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa Username:docker}
	I0414 14:02:07.098922 2231425 ssh_runner.go:195] Run: systemctl --version
	I0414 14:02:07.127540 2231425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 14:02:07.297830 2231425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:02:07.305518 2231425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:02:07.305596 2231425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:02:07.322772 2231425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:02:07.322805 2231425 start.go:495] detecting cgroup driver to use...
	I0414 14:02:07.322887 2231425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 14:02:07.338886 2231425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 14:02:07.354169 2231425 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:02:07.354246 2231425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:02:07.370147 2231425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:02:07.386422 2231425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:02:07.503201 2231425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:02:07.642194 2231425 docker.go:233] disabling docker service ...
	I0414 14:02:07.642266 2231425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:02:07.657618 2231425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:02:07.671661 2231425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:02:07.805253 2231425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:02:07.936806 2231425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:02:07.955665 2231425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:02:07.977834 2231425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 14:02:07.977898 2231425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:02:07.990144 2231425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 14:02:07.990219 2231425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:02:08.001051 2231425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:02:08.011866 2231425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:02:08.022831 2231425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:02:08.034294 2231425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:02:08.044252 2231425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:02:08.044309 2231425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:02:08.057115 2231425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:02:08.067172 2231425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:02:08.181004 2231425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 14:02:08.287832 2231425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 14:02:08.287922 2231425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 14:02:08.293140 2231425 start.go:563] Will wait 60s for crictl version
	I0414 14:02:08.293201 2231425 ssh_runner.go:195] Run: which crictl
	I0414 14:02:08.297185 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:02:08.350602 2231425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 14:02:08.350693 2231425 ssh_runner.go:195] Run: crio --version
	I0414 14:02:08.380823 2231425 ssh_runner.go:195] Run: crio --version
	I0414 14:02:08.415360 2231425 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 14:02:08.416524 2231425 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetIP
	I0414 14:02:08.419481 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:08.419961 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:01:58 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:02:08.419993 2231425 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:02:08.420212 2231425 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:02:08.424377 2231425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:02:08.437464 2231425 kubeadm.go:883] updating cluster {Name:old-k8s-version-954411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-954411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:02:08.437589 2231425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 14:02:08.437646 2231425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:02:08.473351 2231425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 14:02:08.473422 2231425 ssh_runner.go:195] Run: which lz4
	I0414 14:02:08.478843 2231425 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:02:08.483912 2231425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:02:08.483956 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 14:02:10.254792 2231425 crio.go:462] duration metric: took 1.775988843s to copy over tarball
	I0414 14:02:10.254915 2231425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:02:12.899564 2231425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.644617725s)
	I0414 14:02:12.899592 2231425 crio.go:469] duration metric: took 2.644759434s to extract the tarball
	I0414 14:02:12.899600 2231425 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 14:02:12.945037 2231425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:02:12.991509 2231425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 14:02:12.991550 2231425 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 14:02:12.991629 2231425 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:02:12.991677 2231425 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 14:02:12.991715 2231425 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 14:02:12.991691 2231425 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:02:12.991713 2231425 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 14:02:12.991656 2231425 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:02:12.991748 2231425 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:02:12.991744 2231425 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:02:12.993375 2231425 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:02:12.993406 2231425 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:02:12.993492 2231425 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 14:02:12.993701 2231425 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:02:12.993717 2231425 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:02:12.993744 2231425 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 14:02:12.993776 2231425 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 14:02:12.993937 2231425 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:02:13.132156 2231425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 14:02:13.138207 2231425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:02:13.154674 2231425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 14:02:13.198762 2231425 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 14:02:13.198816 2231425 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 14:02:13.198868 2231425 ssh_runner.go:195] Run: which crictl
	I0414 14:02:13.205801 2231425 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 14:02:13.205852 2231425 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:02:13.205899 2231425 ssh_runner.go:195] Run: which crictl
	I0414 14:02:13.229819 2231425 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 14:02:13.229862 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 14:02:13.229892 2231425 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 14:02:13.229935 2231425 ssh_runner.go:195] Run: which crictl
	I0414 14:02:13.229952 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:02:13.278498 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 14:02:13.278527 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 14:02:13.278498 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:02:13.357903 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:02:13.357949 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 14:02:13.357949 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 14:02:13.427839 2231425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 14:02:13.427875 2231425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 14:02:13.427952 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 14:02:13.463208 2231425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 14:02:13.664017 2231425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:02:13.674692 2231425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 14:02:13.676906 2231425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:02:13.679772 2231425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:02:13.752348 2231425 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 14:02:13.752407 2231425 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:02:13.752462 2231425 ssh_runner.go:195] Run: which crictl
	I0414 14:02:13.760839 2231425 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 14:02:13.760888 2231425 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:02:13.760919 2231425 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 14:02:13.760956 2231425 ssh_runner.go:195] Run: which crictl
	I0414 14:02:13.760959 2231425 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 14:02:13.761001 2231425 ssh_runner.go:195] Run: which crictl
	I0414 14:02:13.791878 2231425 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 14:02:13.791930 2231425 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:02:13.791964 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:02:13.792008 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:02:13.791968 2231425 ssh_runner.go:195] Run: which crictl
	I0414 14:02:13.792059 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 14:02:13.834942 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 14:02:13.878301 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:02:13.878390 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:02:13.878390 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:02:13.907901 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 14:02:13.958787 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:02:13.958832 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:02:13.979006 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:02:14.014817 2231425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 14:02:14.044137 2231425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 14:02:14.044240 2231425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:02:14.063857 2231425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 14:02:14.096492 2231425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 14:02:15.808205 2231425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:02:15.950831 2231425 cache_images.go:92] duration metric: took 2.959258613s to LoadCachedImages
	W0414 14:02:15.950962 2231425 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0414 14:02:15.950985 2231425 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.20.0 crio true true} ...
	I0414 14:02:15.951123 2231425 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-954411 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-954411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:02:15.951224 2231425 ssh_runner.go:195] Run: crio config
	I0414 14:02:16.007159 2231425 cni.go:84] Creating CNI manager for ""
	I0414 14:02:16.007208 2231425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:02:16.007223 2231425 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:02:16.007244 2231425 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-954411 NodeName:old-k8s-version-954411 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 14:02:16.007393 2231425 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-954411"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 14:02:16.007460 2231425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 14:02:16.017677 2231425 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:02:16.017751 2231425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 14:02:16.027626 2231425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0414 14:02:16.048186 2231425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:02:16.068640 2231425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0414 14:02:16.086038 2231425 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I0414 14:02:16.090037 2231425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:02:16.102492 2231425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:02:16.236159 2231425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:02:16.254893 2231425 certs.go:68] Setting up /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411 for IP: 192.168.39.90
	I0414 14:02:16.254923 2231425 certs.go:194] generating shared ca certs ...
	I0414 14:02:16.254952 2231425 certs.go:226] acquiring lock for ca certs: {Name:mkd994da28098ae08a84efba20f096b52fe71222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:16.255131 2231425 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key
	I0414 14:02:16.255183 2231425 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key
	I0414 14:02:16.255193 2231425 certs.go:256] generating profile certs ...
	I0414 14:02:16.255263 2231425 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/client.key
	I0414 14:02:16.255294 2231425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/client.crt with IP's: []
	I0414 14:02:16.523333 2231425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/client.crt ...
	I0414 14:02:16.523372 2231425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/client.crt: {Name:mke05337cb5defe1d267510b184d8dbaeb2d14c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:16.523595 2231425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/client.key ...
	I0414 14:02:16.523621 2231425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/client.key: {Name:mk998705f35b2c4f125c6e5ac873c777cbc71e97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:16.523728 2231425 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.key.798e3633
	I0414 14:02:16.523745 2231425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.crt.798e3633 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.90]
	I0414 14:02:16.652576 2231425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.crt.798e3633 ...
	I0414 14:02:16.652626 2231425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.crt.798e3633: {Name:mk7178d1a073b554cc9d69147a63b0fe7a2e9681 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:16.652875 2231425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.key.798e3633 ...
	I0414 14:02:16.652902 2231425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.key.798e3633: {Name:mk6cb6bb20bed971a3c219e5265c60c0db095156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:16.653031 2231425 certs.go:381] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.crt.798e3633 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.crt
	I0414 14:02:16.653138 2231425 certs.go:385] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.key.798e3633 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.key
	I0414 14:02:16.653238 2231425 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.key
	I0414 14:02:16.653263 2231425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.crt with IP's: []
	I0414 14:02:16.805416 2231425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.crt ...
	I0414 14:02:16.805448 2231425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.crt: {Name:mk8720e05c4bd25339fa6d45e4047afa245318bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:16.805621 2231425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.key ...
	I0414 14:02:16.805639 2231425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.key: {Name:mkf5b296e3e27886356c877eef73fac5d7e589c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:02:16.805806 2231425 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem (1338 bytes)
	W0414 14:02:16.805842 2231425 certs.go:480] ignoring /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400_empty.pem, impossibly tiny 0 bytes
	I0414 14:02:16.805853 2231425 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 14:02:16.805873 2231425 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem (1078 bytes)
	I0414 14:02:16.805894 2231425 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:02:16.805916 2231425 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem (1675 bytes)
	I0414 14:02:16.805952 2231425 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:02:16.806528 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:02:16.833208 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:02:16.857246 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:02:16.885790 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:02:16.914739 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 14:02:16.944834 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:02:16.971788 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:02:17.000687 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 14:02:17.025229 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /usr/share/ca-certificates/21904002.pem (1708 bytes)
	I0414 14:02:17.049962 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:02:17.082303 2231425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem --> /usr/share/ca-certificates/2190400.pem (1338 bytes)
	I0414 14:02:17.119486 2231425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:02:17.137826 2231425 ssh_runner.go:195] Run: openssl version
	I0414 14:02:17.146664 2231425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:02:17.162054 2231425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:02:17.166832 2231425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:02:17.166917 2231425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:02:17.173395 2231425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:02:17.193067 2231425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2190400.pem && ln -fs /usr/share/ca-certificates/2190400.pem /etc/ssl/certs/2190400.pem"
	I0414 14:02:17.205436 2231425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2190400.pem
	I0414 14:02:17.210255 2231425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 13:02 /usr/share/ca-certificates/2190400.pem
	I0414 14:02:17.210343 2231425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2190400.pem
	I0414 14:02:17.216410 2231425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2190400.pem /etc/ssl/certs/51391683.0"
	I0414 14:02:17.227772 2231425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21904002.pem && ln -fs /usr/share/ca-certificates/21904002.pem /etc/ssl/certs/21904002.pem"
	I0414 14:02:17.239128 2231425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21904002.pem
	I0414 14:02:17.244015 2231425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 13:02 /usr/share/ca-certificates/21904002.pem
	I0414 14:02:17.244114 2231425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21904002.pem
	I0414 14:02:17.250024 2231425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21904002.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:02:17.260827 2231425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:02:17.265133 2231425 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:02:17.265204 2231425 kubeadm.go:392] StartCluster: {Name:old-k8s-version-954411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-954411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:02:17.265336 2231425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 14:02:17.265421 2231425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:02:17.314385 2231425 cri.go:89] found id: ""
	I0414 14:02:17.314476 2231425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:02:17.325551 2231425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:02:17.336373 2231425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:02:17.346695 2231425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:02:17.346733 2231425 kubeadm.go:157] found existing configuration files:
	
	I0414 14:02:17.346782 2231425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:02:17.356406 2231425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:02:17.356497 2231425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:02:17.366580 2231425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:02:17.379831 2231425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:02:17.379909 2231425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:02:17.393525 2231425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:02:17.405892 2231425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:02:17.405965 2231425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:02:17.418473 2231425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:02:17.428225 2231425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:02:17.428299 2231425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 14:02:17.438056 2231425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:02:17.568005 2231425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 14:02:17.568155 2231425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:02:17.732274 2231425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:02:17.732474 2231425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:02:17.732633 2231425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 14:02:17.928163 2231425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:02:17.929957 2231425 out.go:235]   - Generating certificates and keys ...
	I0414 14:02:17.930090 2231425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:02:17.930223 2231425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:02:18.165073 2231425 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:02:18.550030 2231425 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:02:18.819652 2231425 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:02:19.419932 2231425 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:02:19.494572 2231425 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:02:19.494786 2231425 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-954411] and IPs [192.168.39.90 127.0.0.1 ::1]
	I0414 14:02:19.569868 2231425 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:02:19.570107 2231425 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-954411] and IPs [192.168.39.90 127.0.0.1 ::1]
	I0414 14:02:19.718840 2231425 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:02:19.849363 2231425 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:02:20.116069 2231425 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:02:20.116359 2231425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:02:20.319491 2231425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:02:20.521436 2231425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:02:20.651559 2231425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:02:20.783494 2231425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:02:20.802277 2231425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:02:20.803259 2231425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:02:20.803313 2231425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:02:20.934146 2231425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:02:20.936033 2231425 out.go:235]   - Booting up control plane ...
	I0414 14:02:20.936150 2231425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:02:20.943781 2231425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:02:20.946201 2231425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:02:20.947125 2231425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:02:20.952024 2231425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 14:03:00.950657 2231425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 14:03:00.950813 2231425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:03:00.951057 2231425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:03:05.951793 2231425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:03:05.952023 2231425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:03:15.952626 2231425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:03:15.952944 2231425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:03:35.953842 2231425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:03:35.954069 2231425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:04:15.954576 2231425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:04:15.954840 2231425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:04:15.954853 2231425 kubeadm.go:310] 
	I0414 14:04:15.954899 2231425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 14:04:15.954953 2231425 kubeadm.go:310] 		timed out waiting for the condition
	I0414 14:04:15.954961 2231425 kubeadm.go:310] 
	I0414 14:04:15.955016 2231425 kubeadm.go:310] 	This error is likely caused by:
	I0414 14:04:15.955075 2231425 kubeadm.go:310] 		- The kubelet is not running
	I0414 14:04:15.955240 2231425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 14:04:15.955250 2231425 kubeadm.go:310] 
	I0414 14:04:15.955403 2231425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 14:04:15.955446 2231425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 14:04:15.955524 2231425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 14:04:15.955552 2231425 kubeadm.go:310] 
	I0414 14:04:15.955717 2231425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 14:04:15.955814 2231425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 14:04:15.955826 2231425 kubeadm.go:310] 
	I0414 14:04:15.955950 2231425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 14:04:15.956100 2231425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 14:04:15.956221 2231425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 14:04:15.956323 2231425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 14:04:15.956337 2231425 kubeadm.go:310] 
	I0414 14:04:15.956932 2231425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:04:15.957062 2231425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 14:04:15.957170 2231425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0414 14:04:15.957385 2231425 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-954411] and IPs [192.168.39.90 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-954411] and IPs [192.168.39.90 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-954411] and IPs [192.168.39.90 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-954411] and IPs [192.168.39.90 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0414 14:04:15.957439 2231425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 14:04:16.841525 2231425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:04:16.856665 2231425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:04:16.867156 2231425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:04:16.867187 2231425 kubeadm.go:157] found existing configuration files:
	
	I0414 14:04:16.867236 2231425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:04:16.877168 2231425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:04:16.877230 2231425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:04:16.888208 2231425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:04:16.897304 2231425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:04:16.897366 2231425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:04:16.906689 2231425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:04:16.915643 2231425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:04:16.915746 2231425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:04:16.925635 2231425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:04:16.934817 2231425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:04:16.934881 2231425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 14:04:16.944174 2231425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:04:17.175618 2231425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:06:13.248116 2231425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 14:06:13.248368 2231425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 14:06:13.249847 2231425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 14:06:13.249931 2231425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:06:13.250073 2231425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:06:13.250217 2231425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:06:13.250335 2231425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 14:06:13.250415 2231425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:06:13.252088 2231425 out.go:235]   - Generating certificates and keys ...
	I0414 14:06:13.252193 2231425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:06:13.252248 2231425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:06:13.252374 2231425 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 14:06:13.252466 2231425 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 14:06:13.252609 2231425 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 14:06:13.252718 2231425 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 14:06:13.252866 2231425 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 14:06:13.252961 2231425 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 14:06:13.253068 2231425 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 14:06:13.253167 2231425 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 14:06:13.253237 2231425 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 14:06:13.253315 2231425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:06:13.253390 2231425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:06:13.253473 2231425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:06:13.253561 2231425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:06:13.253644 2231425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:06:13.253775 2231425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:06:13.253924 2231425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:06:13.253995 2231425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:06:13.254098 2231425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:06:13.255487 2231425 out.go:235]   - Booting up control plane ...
	I0414 14:06:13.255617 2231425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:06:13.255720 2231425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:06:13.255804 2231425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:06:13.255917 2231425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:06:13.256104 2231425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 14:06:13.256175 2231425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 14:06:13.256267 2231425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:06:13.256462 2231425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:06:13.256588 2231425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:06:13.256803 2231425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:06:13.256893 2231425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:06:13.257063 2231425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:06:13.257133 2231425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:06:13.257300 2231425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:06:13.257358 2231425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:06:13.257520 2231425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:06:13.257527 2231425 kubeadm.go:310] 
	I0414 14:06:13.257561 2231425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 14:06:13.257596 2231425 kubeadm.go:310] 		timed out waiting for the condition
	I0414 14:06:13.257602 2231425 kubeadm.go:310] 
	I0414 14:06:13.257633 2231425 kubeadm.go:310] 	This error is likely caused by:
	I0414 14:06:13.257666 2231425 kubeadm.go:310] 		- The kubelet is not running
	I0414 14:06:13.257756 2231425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 14:06:13.257763 2231425 kubeadm.go:310] 
	I0414 14:06:13.257905 2231425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 14:06:13.257941 2231425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 14:06:13.257973 2231425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 14:06:13.257979 2231425 kubeadm.go:310] 
	I0414 14:06:13.258067 2231425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 14:06:13.258141 2231425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 14:06:13.258147 2231425 kubeadm.go:310] 
	I0414 14:06:13.258242 2231425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 14:06:13.258337 2231425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 14:06:13.258417 2231425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 14:06:13.258496 2231425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 14:06:13.258579 2231425 kubeadm.go:310] 
	I0414 14:06:13.258589 2231425 kubeadm.go:394] duration metric: took 3m55.993390668s to StartCluster
	I0414 14:06:13.258657 2231425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:06:13.258719 2231425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:06:13.308141 2231425 cri.go:89] found id: ""
	I0414 14:06:13.308178 2231425 logs.go:282] 0 containers: []
	W0414 14:06:13.308190 2231425 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:06:13.308198 2231425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:06:13.308264 2231425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:06:13.344971 2231425 cri.go:89] found id: ""
	I0414 14:06:13.345000 2231425 logs.go:282] 0 containers: []
	W0414 14:06:13.345008 2231425 logs.go:284] No container was found matching "etcd"
	I0414 14:06:13.345013 2231425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:06:13.345064 2231425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:06:13.391927 2231425 cri.go:89] found id: ""
	I0414 14:06:13.391968 2231425 logs.go:282] 0 containers: []
	W0414 14:06:13.391980 2231425 logs.go:284] No container was found matching "coredns"
	I0414 14:06:13.391989 2231425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:06:13.392066 2231425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:06:13.436277 2231425 cri.go:89] found id: ""
	I0414 14:06:13.436315 2231425 logs.go:282] 0 containers: []
	W0414 14:06:13.436327 2231425 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:06:13.436336 2231425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:06:13.436407 2231425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:06:13.471649 2231425 cri.go:89] found id: ""
	I0414 14:06:13.471688 2231425 logs.go:282] 0 containers: []
	W0414 14:06:13.471701 2231425 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:06:13.471709 2231425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:06:13.471776 2231425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:06:13.516915 2231425 cri.go:89] found id: ""
	I0414 14:06:13.516945 2231425 logs.go:282] 0 containers: []
	W0414 14:06:13.516957 2231425 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:06:13.516966 2231425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:06:13.517036 2231425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:06:13.559136 2231425 cri.go:89] found id: ""
	I0414 14:06:13.559172 2231425 logs.go:282] 0 containers: []
	W0414 14:06:13.559183 2231425 logs.go:284] No container was found matching "kindnet"
	I0414 14:06:13.559197 2231425 logs.go:123] Gathering logs for container status ...
	I0414 14:06:13.559216 2231425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:06:13.599676 2231425 logs.go:123] Gathering logs for kubelet ...
	I0414 14:06:13.599707 2231425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:06:13.648298 2231425 logs.go:123] Gathering logs for dmesg ...
	I0414 14:06:13.648338 2231425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:06:13.662883 2231425 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:06:13.662915 2231425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:06:13.800402 2231425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:06:13.800430 2231425 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:06:13.800445 2231425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0414 14:06:13.908074 2231425 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 14:06:13.908158 2231425 out.go:270] * 
	* 
	W0414 14:06:13.908223 2231425 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 14:06:13.908237 2231425 out.go:270] * 
	* 
	W0414 14:06:13.909098 2231425 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:06:13.911951 2231425 out.go:201] 
	W0414 14:06:13.913031 2231425 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 14:06:13.913075 2231425 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 14:06:13.913092 2231425 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 14:06:13.914442 2231425 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-954411 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-954411 -n old-k8s-version-954411
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-954411 -n old-k8s-version-954411: exit status 6 (247.290715ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 14:06:14.214798 2235051 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-954411" does not appear in /home/jenkins/minikube-integration/20623-2183077/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-954411" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (307.36s)
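
For context on the repeated [kubelet-check] failures above: kubeadm's wait-control-plane phase simply polls the kubelet's health endpoint on port 10248 until it answers (the log shows the equivalent curl and the 40s initial timeout). The following is a minimal sketch of that probe, not minikube's or kubeadm's own code, and it assumes it is run on the node itself (for example inside `minikube ssh`):

// healthzprobe.go: polls the kubelet /healthz endpoint the same way the
// [kubelet-check] lines above do, so the failure can be reproduced by hand.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	const url = "http://localhost:10248/healthz" // endpoint and port taken from the log above
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(40 * time.Second) // mirrors kubeadm's "Initial timeout of 40s"

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// "connection refused" here matches the log: nothing is listening on 10248.
			fmt.Println("kubelet not healthy yet:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("kubelet healthz: %s %s\n", resp.Status, body)
		return
	}
	fmt.Println("timed out waiting for the kubelet; see `journalctl -xeu kubelet` as the log suggests")
}

If this probe keeps printing connection refused, the kubelet never came up, which is exactly the situation the `systemctl status kubelet` / `journalctl -xeu kubelet` advice in the kubeadm output is aimed at.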

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-954411 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-954411 create -f testdata/busybox.yaml: exit status 1 (46.679474ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-954411" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-954411 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-954411 -n old-k8s-version-954411
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-954411 -n old-k8s-version-954411: exit status 6 (241.686913ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 14:06:14.501348 2235090 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-954411" does not appear in /home/jenkins/minikube-integration/20623-2183077/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-954411" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-954411 -n old-k8s-version-954411
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-954411 -n old-k8s-version-954411: exit status 6 (264.239194ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 14:06:14.763126 2235135 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-954411" does not appear in /home/jenkins/minikube-integration/20623-2183077/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-954411" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)
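
The DeployApp step never reaches the cluster: `kubectl --context old-k8s-version-954411` fails because the failed first start never wrote the profile into /home/jenkins/minikube-integration/20623-2183077/kubeconfig (see the status.go:458 error above). A small sketch of that precondition check follows; it is illustrative only and assumes nothing beyond kubectl being on PATH:

// contextcheck.go: verifies that the profile's kubectl context exists before
// any kubectl command is run against it.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const profile = "old-k8s-version-954411" // context name used by the test above

	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "listing contexts failed:", err)
		os.Exit(1)
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if sc.Text() == profile {
			fmt.Println("context exists; kubectl --context", profile, "should be usable")
			return
		}
	}
	// Matches the failure above: the context is absent from the kubeconfig because
	// the first start failed, so `kubectl --context ... create -f ...` exits with status 1.
	fmt.Println("context missing from kubeconfig")
}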

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (96.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-954411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-954411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m36.434113587s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_3.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-954411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-954411 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-954411 describe deploy/metrics-server -n kube-system: exit status 1 (46.543987ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-954411" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-954411 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-954411 -n old-k8s-version-954411
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-954411 -n old-k8s-version-954411: exit status 6 (235.543507ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 14:07:51.485962 2235739 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-954411" does not appear in /home/jenkins/minikube-integration/20623-2183077/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-954411" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (96.72s)
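
The addon enable fails for the same underlying reason: the metrics-server manifests are applied with the node-local kubectl against localhost:8443, and the log shows that connection being refused because the apiserver never started. The sketch below, which is not part of minikube and is meant to be run on the node, only checks whether anything is accepting connections on that endpoint:

// apiserverprobe.go: reachability probe for the apiserver endpoint the addon
// manifests are applied against (localhost:8443 in the log above).
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 3 * time.Second,
		// The apiserver serves a self-signed certificate; skip verification
		// because this is a reachability probe only, not an authenticated call.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/readyz")
	if err != nil {
		// "connection refused" reproduces the addon failure above: nothing listens on 8443,
		// so `kubectl apply -f /etc/kubernetes/addons/...` cannot succeed either.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver /readyz:", resp.Status)
}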

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (512.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-954411 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0414 14:08:48.851070 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:10:27.986445 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-954411 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m30.867708964s)

                                                
                                                
-- stdout --
	* [old-k8s-version-954411] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20623
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-954411" primary control-plane node in "old-k8s-version-954411" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-954411" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 14:07:54.039976 2235858 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:07:54.040074 2235858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:07:54.040082 2235858 out.go:358] Setting ErrFile to fd 2...
	I0414 14:07:54.040088 2235858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:07:54.040328 2235858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 14:07:54.040914 2235858 out.go:352] Setting JSON to false
	I0414 14:07:54.041944 2235858 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":168613,"bootTime":1744471061,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:07:54.042059 2235858 start.go:139] virtualization: kvm guest
	I0414 14:07:54.043938 2235858 out.go:177] * [old-k8s-version-954411] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:07:54.045022 2235858 out.go:177]   - MINIKUBE_LOCATION=20623
	I0414 14:07:54.045047 2235858 notify.go:220] Checking for updates...
	I0414 14:07:54.046923 2235858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:07:54.047886 2235858 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:07:54.048794 2235858 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:07:54.049765 2235858 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:07:54.050731 2235858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:07:54.052097 2235858 config.go:182] Loaded profile config "old-k8s-version-954411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 14:07:54.052674 2235858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:07:54.052798 2235858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:07:54.069857 2235858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38011
	I0414 14:07:54.070407 2235858 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:07:54.071153 2235858 main.go:141] libmachine: Using API Version  1
	I0414 14:07:54.071176 2235858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:07:54.071587 2235858 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:07:54.071808 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:07:54.073273 2235858 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0414 14:07:54.074559 2235858 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:07:54.074884 2235858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:07:54.074925 2235858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:07:54.090526 2235858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39207
	I0414 14:07:54.090960 2235858 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:07:54.091424 2235858 main.go:141] libmachine: Using API Version  1
	I0414 14:07:54.091452 2235858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:07:54.091801 2235858 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:07:54.092015 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:07:54.129216 2235858 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 14:07:54.130241 2235858 start.go:297] selected driver: kvm2
	I0414 14:07:54.130256 2235858 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-954411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 Clust
erName:old-k8s-version-954411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:
/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:07:54.130374 2235858 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:07:54.131103 2235858 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:07:54.131194 2235858 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20623-2183077/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:07:54.147733 2235858 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:07:54.148154 2235858 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:07:54.148198 2235858 cni.go:84] Creating CNI manager for ""
	I0414 14:07:54.148247 2235858 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:07:54.148287 2235858 start.go:340] cluster config:
	{Name:old-k8s-version-954411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-954411 Namespace:default APIServerHAVIP: APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:07:54.148415 2235858 iso.go:125] acquiring lock: {Name:mk1b6bc811d798b73231639961523f4c8d001a9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:07:54.149942 2235858 out.go:177] * Starting "old-k8s-version-954411" primary control-plane node in "old-k8s-version-954411" cluster
	I0414 14:07:54.150962 2235858 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 14:07:54.151004 2235858 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 14:07:54.151014 2235858 cache.go:56] Caching tarball of preloaded images
	I0414 14:07:54.151093 2235858 preload.go:172] Found /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 14:07:54.151105 2235858 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0414 14:07:54.151206 2235858 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/config.json ...
	I0414 14:07:54.151380 2235858 start.go:360] acquireMachinesLock for old-k8s-version-954411: {Name:mka8bf7d0904b7ab9a32ecac2c5513c5d5418afd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:07:54.151431 2235858 start.go:364] duration metric: took 32.603µs to acquireMachinesLock for "old-k8s-version-954411"
	I0414 14:07:54.151445 2235858 start.go:96] Skipping create...Using existing machine configuration
	I0414 14:07:54.151453 2235858 fix.go:54] fixHost starting: 
	I0414 14:07:54.151702 2235858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:07:54.151733 2235858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:07:54.167108 2235858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42887
	I0414 14:07:54.167560 2235858 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:07:54.168078 2235858 main.go:141] libmachine: Using API Version  1
	I0414 14:07:54.168110 2235858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:07:54.168444 2235858 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:07:54.168649 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:07:54.168825 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetState
	I0414 14:07:54.170535 2235858 fix.go:112] recreateIfNeeded on old-k8s-version-954411: state=Stopped err=<nil>
	I0414 14:07:54.170561 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	W0414 14:07:54.170705 2235858 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 14:07:54.172816 2235858 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-954411" ...
	I0414 14:07:54.173809 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .Start
	I0414 14:07:54.174029 2235858 main.go:141] libmachine: (old-k8s-version-954411) starting domain...
	I0414 14:07:54.174048 2235858 main.go:141] libmachine: (old-k8s-version-954411) ensuring networks are active...
	I0414 14:07:54.174815 2235858 main.go:141] libmachine: (old-k8s-version-954411) Ensuring network default is active
	I0414 14:07:54.175205 2235858 main.go:141] libmachine: (old-k8s-version-954411) Ensuring network mk-old-k8s-version-954411 is active
	I0414 14:07:54.175524 2235858 main.go:141] libmachine: (old-k8s-version-954411) getting domain XML...
	I0414 14:07:54.176309 2235858 main.go:141] libmachine: (old-k8s-version-954411) creating domain...
	I0414 14:07:55.483963 2235858 main.go:141] libmachine: (old-k8s-version-954411) waiting for IP...
	I0414 14:07:55.484823 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:07:55.485275 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:07:55.485345 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:07:55.485257 2235893 retry.go:31] will retry after 256.189402ms: waiting for domain to come up
	I0414 14:07:55.742872 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:07:55.743516 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:07:55.743552 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:07:55.743469 2235893 retry.go:31] will retry after 390.261303ms: waiting for domain to come up
	I0414 14:07:56.135250 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:07:56.135851 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:07:56.135883 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:07:56.135808 2235893 retry.go:31] will retry after 401.095535ms: waiting for domain to come up
	I0414 14:07:56.538332 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:07:56.538898 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:07:56.538926 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:07:56.538831 2235893 retry.go:31] will retry after 405.836377ms: waiting for domain to come up
	I0414 14:07:56.946556 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:07:56.947160 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:07:56.947195 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:07:56.947103 2235893 retry.go:31] will retry after 506.572266ms: waiting for domain to come up
	I0414 14:07:57.454747 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:07:57.455238 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:07:57.455313 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:07:57.455256 2235893 retry.go:31] will retry after 939.055164ms: waiting for domain to come up
	I0414 14:07:58.396420 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:07:58.397024 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:07:58.397056 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:07:58.396955 2235893 retry.go:31] will retry after 1.081991027s: waiting for domain to come up
	I0414 14:07:59.481024 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:07:59.481606 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:07:59.481658 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:07:59.481602 2235893 retry.go:31] will retry after 913.474312ms: waiting for domain to come up
	I0414 14:08:00.396770 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:00.397275 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:08:00.397312 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:08:00.397261 2235893 retry.go:31] will retry after 1.60986406s: waiting for domain to come up
	I0414 14:08:02.009234 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:02.009957 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:08:02.009988 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:08:02.009902 2235893 retry.go:31] will retry after 1.513715752s: waiting for domain to come up
	I0414 14:08:03.525706 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:03.526332 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:08:03.526365 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:08:03.526305 2235893 retry.go:31] will retry after 2.35730601s: waiting for domain to come up
	I0414 14:08:05.885765 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:05.886202 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:08:05.886233 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:08:05.886141 2235893 retry.go:31] will retry after 2.847508796s: waiting for domain to come up
	I0414 14:08:08.737115 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:08.737581 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | unable to find current IP address of domain old-k8s-version-954411 in network mk-old-k8s-version-954411
	I0414 14:08:08.737602 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | I0414 14:08:08.737547 2235893 retry.go:31] will retry after 3.989566384s: waiting for domain to come up
	I0414 14:08:12.729787 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:12.730318 2235858 main.go:141] libmachine: (old-k8s-version-954411) found domain IP: 192.168.39.90
	I0414 14:08:12.730347 2235858 main.go:141] libmachine: (old-k8s-version-954411) reserving static IP address...
	I0414 14:08:12.730362 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has current primary IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:12.730883 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "old-k8s-version-954411", mac: "52:54:00:e4:99:d7", ip: "192.168.39.90"} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:08:06 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:08:12.730921 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | skip adding static IP to network mk-old-k8s-version-954411 - found existing host DHCP lease matching {name: "old-k8s-version-954411", mac: "52:54:00:e4:99:d7", ip: "192.168.39.90"}
	I0414 14:08:12.730948 2235858 main.go:141] libmachine: (old-k8s-version-954411) reserved static IP address 192.168.39.90 for domain old-k8s-version-954411
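The "will retry after …" lines above are a randomized, growing backoff around the libvirt DHCP-lease lookup for the domain. A minimal, self-contained Go sketch of that polling pattern (the waitForIP helper and stub lookup below are hypothetical, not minikube's actual retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a randomized, growing interval between attempts (the "will retry
// after ..." lines in the log).
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		// Jitter the delay so concurrent waiters do not poll in lockstep.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	// Stubbed lookup for illustration; the real lookup asks libvirt for the
	// domain's DHCP lease, as in the DBG lines above.
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.90", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}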
	I0414 14:08:12.730966 2235858 main.go:141] libmachine: (old-k8s-version-954411) waiting for SSH...
	I0414 14:08:12.730980 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | Getting to WaitForSSH function...
	I0414 14:08:12.733270 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:12.733628 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:08:06 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:08:12.733672 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:12.733761 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | Using SSH client type: external
	I0414 14:08:12.733802 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | Using SSH private key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa (-rw-------)
	I0414 14:08:12.733835 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:08:12.733858 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | About to run SSH command:
	I0414 14:08:12.733870 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | exit 0
	I0414 14:08:12.856817 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | SSH cmd err, output: <nil>: 
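The WaitForSSH step runs the external ssh binary with the options logged above and treats a successful `exit 0` as proof that sshd is reachable and the private key works. A small Go sketch of that probe; the key path is a placeholder and only a subset of the logged options is shown:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the "Using SSH client type: external" argv in the log; the key
	// path is a placeholder, and "exit 0" is the cheap reachability probe.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/path/to/.minikube/machines/<name>/id_rsa",
		"-p", "22",
		"docker@192.168.39.90",
		"exit 0",
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}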
	I0414 14:08:12.857233 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetConfigRaw
	I0414 14:08:12.857896 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetIP
	I0414 14:08:12.860701 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:12.861152 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:08:06 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:08:12.861185 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:12.861432 2235858 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/config.json ...
	I0414 14:08:12.861626 2235858 machine.go:93] provisionDockerMachine start ...
	I0414 14:08:12.861645 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:08:12.861882 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:08:12.864495 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:12.864876 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:08:06 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:08:12.864906 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:12.865091 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:08:12.865267 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:08:12.865447 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:08:12.865589 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:08:12.865750 2235858 main.go:141] libmachine: Using SSH client type: native
	I0414 14:08:12.865996 2235858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0414 14:08:12.866007 2235858 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:08:12.969160 2235858 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 14:08:12.969193 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetMachineName
	I0414 14:08:12.969443 2235858 buildroot.go:166] provisioning hostname "old-k8s-version-954411"
	I0414 14:08:12.969470 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetMachineName
	I0414 14:08:12.969643 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:08:12.972548 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:12.973000 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:08:06 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:08:12.973029 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:12.973284 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:08:12.973474 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:08:12.973629 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:08:12.973773 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:08:12.973935 2235858 main.go:141] libmachine: Using SSH client type: native
	I0414 14:08:12.974191 2235858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0414 14:08:12.974206 2235858 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-954411 && echo "old-k8s-version-954411" | sudo tee /etc/hostname
	I0414 14:08:13.091941 2235858 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-954411
	
	I0414 14:08:13.091989 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:08:13.095133 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:13.095510 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:08:06 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:08:13.095543 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:13.095672 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:08:13.095852 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:08:13.096018 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:08:13.096196 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:08:13.096361 2235858 main.go:141] libmachine: Using SSH client type: native
	I0414 14:08:13.096645 2235858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0414 14:08:13.096670 2235858 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-954411' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-954411/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-954411' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:08:13.205664 2235858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
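The inline shell above first checks with grep -xq whether /etc/hosts already maps the new hostname; if not, it rewrites an existing 127.0.1.1 entry in place with sed, or appends one otherwise. Purely as an illustration of how such a snippet could be templated from Go (the etcHostsFixup helper is hypothetical):

package main

import "fmt"

// etcHostsFixup returns the shell snippet that keeps /etc/hosts pointing
// 127.0.1.1 at the machine's hostname, mirroring the script in the log.
func etcHostsFixup(name string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
}

func main() { fmt.Println(etcHostsFixup("old-k8s-version-954411")) }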
	I0414 14:08:13.205698 2235858 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20623-2183077/.minikube CaCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20623-2183077/.minikube}
	I0414 14:08:13.205724 2235858 buildroot.go:174] setting up certificates
	I0414 14:08:13.205738 2235858 provision.go:84] configureAuth start
	I0414 14:08:13.205748 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetMachineName
	I0414 14:08:13.206037 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetIP
	I0414 14:08:13.208864 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:13.209352 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:08:06 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:08:13.209385 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:13.209528 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:08:13.212212 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:13.212535 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:08:06 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:08:13.212583 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:13.212719 2235858 provision.go:143] copyHostCerts
	I0414 14:08:13.212803 2235858 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem, removing ...
	I0414 14:08:13.212823 2235858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem
	I0414 14:08:13.212886 2235858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem (1078 bytes)
	I0414 14:08:13.213007 2235858 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem, removing ...
	I0414 14:08:13.213019 2235858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem
	I0414 14:08:13.213045 2235858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem (1123 bytes)
	I0414 14:08:13.213112 2235858 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem, removing ...
	I0414 14:08:13.213119 2235858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem
	I0414 14:08:13.213140 2235858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem (1675 bytes)
	I0414 14:08:13.213199 2235858 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-954411 san=[127.0.0.1 192.168.39.90 localhost minikube old-k8s-version-954411]
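The server certificate generated here must carry both IP and DNS SANs (127.0.0.1, 192.168.39.90, localhost, minikube, old-k8s-version-954411) so that any of those addresses validates against it. A rough crypto/x509 sketch of building such a certificate; the throwaway CA below is for illustration only, since minikube signs with the existing ca.pem/ca-key.pem from its certs directory:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// serverCertTemplate carries the SANs from the provision.go line above:
// IPs 127.0.0.1 and 192.168.39.90, DNS names localhost, minikube and the
// machine name.
func serverCertTemplate() *x509.Certificate {
	return &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-954411"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.90")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-954411"},
	}
}

func main() {
	// Throwaway CA for illustration; the real flow reuses ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvDER, err := x509.CreateCertificate(rand.Reader, serverCertTemplate(), caCert, &srvKey.PublicKey, caKey)
	fmt.Println(len(srvDER), err)
}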
	I0414 14:08:13.415713 2235858 provision.go:177] copyRemoteCerts
	I0414 14:08:13.415803 2235858 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:08:13.415838 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:08:13.418800 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:13.419167 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:08:06 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:08:13.419220 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:13.419362 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:08:13.419590 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:08:13.419768 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:08:13.419913 2235858 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa Username:docker}
	I0414 14:08:13.504022 2235858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 14:08:13.528837 2235858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0414 14:08:13.556474 2235858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:08:13.582165 2235858 provision.go:87] duration metric: took 376.413267ms to configureAuth
	I0414 14:08:13.582210 2235858 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:08:13.582412 2235858 config.go:182] Loaded profile config "old-k8s-version-954411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 14:08:13.582506 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:08:13.585364 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:13.585681 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:08:06 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:08:13.585711 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:13.585884 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:08:13.586087 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:08:13.586253 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:08:13.586352 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:08:13.586488 2235858 main.go:141] libmachine: Using SSH client type: native
	I0414 14:08:13.586700 2235858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0414 14:08:13.586717 2235858 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 14:08:13.808333 2235858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 14:08:13.808359 2235858 machine.go:96] duration metric: took 946.720172ms to provisionDockerMachine
	I0414 14:08:13.808372 2235858 start.go:293] postStartSetup for "old-k8s-version-954411" (driver="kvm2")
	I0414 14:08:13.808382 2235858 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:08:13.808413 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:08:13.808749 2235858 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:08:13.808790 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:08:13.811624 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:13.811947 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:08:06 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:08:13.811972 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:13.812121 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:08:13.812335 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:08:13.812522 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:08:13.812686 2235858 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa Username:docker}
	I0414 14:08:13.897260 2235858 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:08:13.901875 2235858 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:08:13.901902 2235858 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/addons for local assets ...
	I0414 14:08:13.901965 2235858 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/files for local assets ...
	I0414 14:08:13.902064 2235858 filesync.go:149] local asset: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem -> 21904002.pem in /etc/ssl/certs
	I0414 14:08:13.902185 2235858 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:08:13.913491 2235858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:08:13.939245 2235858 start.go:296] duration metric: took 130.858209ms for postStartSetup
	I0414 14:08:13.939286 2235858 fix.go:56] duration metric: took 19.787833071s for fixHost
	I0414 14:08:13.939309 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:08:13.942808 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:13.943265 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:08:06 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:08:13.943296 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:13.943490 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:08:13.943719 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:08:13.943899 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:08:13.944076 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:08:13.944287 2235858 main.go:141] libmachine: Using SSH client type: native
	I0414 14:08:13.944528 2235858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0414 14:08:13.944540 2235858 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:08:14.054142 2235858 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744639694.026550616
	
	I0414 14:08:14.054170 2235858 fix.go:216] guest clock: 1744639694.026550616
	I0414 14:08:14.054182 2235858 fix.go:229] Guest: 2025-04-14 14:08:14.026550616 +0000 UTC Remote: 2025-04-14 14:08:13.939290977 +0000 UTC m=+19.939340087 (delta=87.259639ms)
	I0414 14:08:14.054229 2235858 fix.go:200] guest clock delta is within tolerance: 87.259639ms
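The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the drift when it is under a tolerance (about 87ms here). A small sketch of that comparison; the 2s threshold below is an assumption, not minikube's actual constant:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as captured in the log line above.
	guestRaw := "1744639694.026550616"
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	// Illustrative threshold only; the tolerance minikube applies is not
	// shown in this log.
	const tolerance = 2 * time.Second
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would resync the guest clock\n", delta)
	}
}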
	I0414 14:08:14.054236 2235858 start.go:83] releasing machines lock for "old-k8s-version-954411", held for 19.902795932s
	I0414 14:08:14.054261 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:08:14.054558 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetIP
	I0414 14:08:14.057665 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:14.058024 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:08:06 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:08:14.058055 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:14.058212 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:08:14.058707 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:08:14.058861 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .DriverName
	I0414 14:08:14.058961 2235858 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:08:14.059018 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:08:14.059024 2235858 ssh_runner.go:195] Run: cat /version.json
	I0414 14:08:14.059039 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHHostname
	I0414 14:08:14.061970 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:14.062198 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:14.062332 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:08:06 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:08:14.062371 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:14.062488 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:08:14.062610 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:08:06 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:08:14.062653 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:14.062667 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:08:14.062862 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHPort
	I0414 14:08:14.062864 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:08:14.063088 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHKeyPath
	I0414 14:08:14.063094 2235858 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa Username:docker}
	I0414 14:08:14.063254 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetSSHUsername
	I0414 14:08:14.063436 2235858 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/old-k8s-version-954411/id_rsa Username:docker}
	I0414 14:08:14.168301 2235858 ssh_runner.go:195] Run: systemctl --version
	I0414 14:08:14.174841 2235858 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 14:08:14.318541 2235858 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:08:14.325484 2235858 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:08:14.325579 2235858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:08:14.342395 2235858 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:08:14.342423 2235858 start.go:495] detecting cgroup driver to use...
	I0414 14:08:14.342490 2235858 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 14:08:14.362044 2235858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 14:08:14.376554 2235858 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:08:14.376611 2235858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:08:14.391071 2235858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:08:14.407276 2235858 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:08:14.527186 2235858 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:08:14.675644 2235858 docker.go:233] disabling docker service ...
	I0414 14:08:14.675741 2235858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:08:14.692336 2235858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:08:14.705657 2235858 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:08:14.845966 2235858 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:08:14.990511 2235858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:08:15.004662 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:08:15.023609 2235858 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 14:08:15.023702 2235858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:08:15.034830 2235858 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 14:08:15.034917 2235858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:08:15.046226 2235858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:08:15.058187 2235858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:08:15.070517 2235858 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:08:15.082589 2235858 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:08:15.093343 2235858 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:08:15.093402 2235858 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:08:15.107750 2235858 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:08:15.118019 2235858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:08:15.247209 2235858 ssh_runner.go:195] Run: sudo systemctl restart crio
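The preceding commands point cri-o at the registry.k8s.io/pause:3.2 pause image, switch it to the cgroupfs cgroup manager with conmon in the "pod" cgroup, enable IP forwarding, and restart the daemon. A sketch that drives the same sed/systemctl sequence over SSH; runSSH is a stand-in for minikube's ssh_runner, not its real API:

package main

import (
	"fmt"
	"os/exec"
)

// runSSH is a stand-in for minikube's ssh_runner: it executes one command on
// the guest and returns its combined output.
func runSSH(host, cmd string) ([]byte, error) {
	return exec.Command("ssh", host, cmd).CombinedOutput()
}

func configureCRIO(host string) error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		// Mirror the sed edits and sysctl/systemctl steps from the log.
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, s := range steps {
		if out, err := runSSH(host, s); err != nil {
			return fmt.Errorf("%q failed: %v: %s", s, err, out)
		}
	}
	return nil
}

func main() {
	if err := configureCRIO("docker@192.168.39.90"); err != nil {
		fmt.Println(err)
	}
}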
	I0414 14:08:15.358282 2235858 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 14:08:15.358354 2235858 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 14:08:15.364920 2235858 start.go:563] Will wait 60s for crictl version
	I0414 14:08:15.365021 2235858 ssh_runner.go:195] Run: which crictl
	I0414 14:08:15.369050 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:08:15.410011 2235858 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 14:08:15.410117 2235858 ssh_runner.go:195] Run: crio --version
	I0414 14:08:15.439229 2235858 ssh_runner.go:195] Run: crio --version
	I0414 14:08:15.470012 2235858 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 14:08:15.471164 2235858 main.go:141] libmachine: (old-k8s-version-954411) Calling .GetIP
	I0414 14:08:15.474274 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:15.474670 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:99:d7", ip: ""} in network mk-old-k8s-version-954411: {Iface:virbr1 ExpiryTime:2025-04-14 15:08:06 +0000 UTC Type:0 Mac:52:54:00:e4:99:d7 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-954411 Clientid:01:52:54:00:e4:99:d7}
	I0414 14:08:15.474709 2235858 main.go:141] libmachine: (old-k8s-version-954411) DBG | domain old-k8s-version-954411 has defined IP address 192.168.39.90 and MAC address 52:54:00:e4:99:d7 in network mk-old-k8s-version-954411
	I0414 14:08:15.474917 2235858 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:08:15.479192 2235858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:08:15.491999 2235858 kubeadm.go:883] updating cluster {Name:old-k8s-version-954411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-954411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:08:15.492145 2235858 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 14:08:15.492188 2235858 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:08:15.550250 2235858 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 14:08:15.550333 2235858 ssh_runner.go:195] Run: which lz4
	I0414 14:08:15.554557 2235858 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:08:15.558866 2235858 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:08:15.558911 2235858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 14:08:17.233811 2235858 crio.go:462] duration metric: took 1.679301372s to copy over tarball
	I0414 14:08:17.233894 2235858 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:08:20.174871 2235858 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.940945277s)
	I0414 14:08:20.174907 2235858 crio.go:469] duration metric: took 2.941065758s to extract the tarball
	I0414 14:08:20.174919 2235858 ssh_runner.go:146] rm: /preloaded.tar.lz4
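The preload step above checks whether /preloaded.tar.lz4 already exists on the guest, copies the ~473 MB cached tarball over when it does not, extracts it into /var with lz4, and removes the tarball afterwards. A compact sketch of that flow; plain ssh/scp stand in for minikube's ssh_runner and the local cache path is a placeholder:

package main

import (
	"fmt"
	"os/exec"
)

func run(host, cmd string) error {
	out, err := exec.Command("ssh", host, cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q: %v: %s", cmd, err, out)
	}
	return nil
}

func restorePreload(host, localTarball string) error {
	const remote = "/preloaded.tar.lz4"
	// Only copy if the guest does not already have the tarball.
	if err := run(host, `stat -c "%s %y" `+remote); err != nil {
		if out, err := exec.Command("scp", localTarball, host+":"+remote).CombinedOutput(); err != nil {
			return fmt.Errorf("scp: %v: %s", err, out)
		}
	}
	if err := run(host, "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf "+remote); err != nil {
		return err
	}
	return run(host, "sudo rm -f "+remote)
}

func main() {
	err := restorePreload("docker@192.168.39.90", "/path/to/.minikube/cache/preloaded-tarball/<tarball>.tar.lz4")
	fmt.Println(err)
}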
	I0414 14:08:20.220036 2235858 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:08:20.259777 2235858 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 14:08:20.259809 2235858 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 14:08:20.259907 2235858 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:08:20.260388 2235858 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:08:20.260341 2235858 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 14:08:20.260545 2235858 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 14:08:20.260608 2235858 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:08:20.260672 2235858 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 14:08:20.260698 2235858 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:08:20.260443 2235858 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:08:20.263106 2235858 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 14:08:20.263121 2235858 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:08:20.263132 2235858 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:08:20.263114 2235858 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:08:20.263158 2235858 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:08:20.263176 2235858 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:08:20.263175 2235858 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 14:08:20.263114 2235858 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 14:08:20.440745 2235858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 14:08:20.486200 2235858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:08:20.491166 2235858 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 14:08:20.491216 2235858 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 14:08:20.491263 2235858 ssh_runner.go:195] Run: which crictl
	I0414 14:08:20.532549 2235858 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 14:08:20.532615 2235858 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:08:20.532663 2235858 ssh_runner.go:195] Run: which crictl
	I0414 14:08:20.532667 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 14:08:20.537271 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:08:20.577252 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 14:08:20.584710 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:08:20.602626 2235858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:08:20.604287 2235858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 14:08:20.604303 2235858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:08:20.604524 2235858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 14:08:20.607610 2235858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:08:20.654955 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 14:08:20.660184 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 14:08:20.804338 2235858 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 14:08:20.804370 2235858 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 14:08:20.804402 2235858 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:08:20.804406 2235858 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 14:08:20.804453 2235858 ssh_runner.go:195] Run: which crictl
	I0414 14:08:20.804453 2235858 ssh_runner.go:195] Run: which crictl
	I0414 14:08:20.846056 2235858 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 14:08:20.846098 2235858 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 14:08:20.846132 2235858 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 14:08:20.846201 2235858 ssh_runner.go:195] Run: which crictl
	I0414 14:08:20.846151 2235858 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 14:08:20.846238 2235858 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:08:20.846102 2235858 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:08:20.846261 2235858 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 14:08:20.846288 2235858 ssh_runner.go:195] Run: which crictl
	I0414 14:08:20.846305 2235858 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 14:08:20.846305 2235858 ssh_runner.go:195] Run: which crictl
	I0414 14:08:20.846416 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:08:20.846377 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 14:08:20.904543 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:08:20.904577 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 14:08:20.904643 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 14:08:20.904673 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:08:20.904715 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:08:21.017093 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:08:21.017171 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:08:21.028570 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 14:08:21.032457 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 14:08:21.032609 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 14:08:21.074724 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 14:08:21.132209 2235858 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 14:08:21.132365 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 14:08:21.160116 2235858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 14:08:21.160645 2235858 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 14:08:21.162814 2235858 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 14:08:21.194941 2235858 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 14:08:21.206175 2235858 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 14:08:23.106576 2235858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:08:23.258124 2235858 cache_images.go:92] duration metric: took 2.998289424s to LoadCachedImages
	W0414 14:08:23.258230 2235858 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
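The cache_images flow above inspects each expected image ID with podman, marks the image "needs transfer" when the ID is absent or different, removes the stale tag with crictl rmi, and then tries to load the image from the local cache, which is what fails here because the cached file is missing. A rough sketch of that decision (paths and helper names are illustrative, not minikube's real functions):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// needsTransfer reports whether image is absent from the guest's container
// storage or present under a different ID than expected.
func needsTransfer(host, image, wantID string) bool {
	out, err := exec.Command("ssh", host,
		"sudo podman image inspect --format {{.Id}} "+image).Output()
	if err != nil {
		return true // not present at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func loadCachedImage(host, image, wantID, cachePath string) error {
	if !needsTransfer(host, image, wantID) {
		return nil
	}
	// Drop whatever is tagged under that name before re-loading.
	_ = exec.Command("ssh", host, "sudo /usr/bin/crictl rmi "+image).Run()
	if _, err := os.Stat(cachePath); err != nil {
		// This is the failure mode in the log: the cache file is missing,
		// so the image has to be pulled later instead.
		return fmt.Errorf("load from cache: %w", err)
	}
	// Copy and import of the cached tarball would go here.
	return nil
}

func main() {
	err := loadCachedImage("docker@192.168.39.90", "registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
		"/path/to/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2")
	fmt.Println(err)
}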
	I0414 14:08:23.258248 2235858 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.20.0 crio true true} ...
	I0414 14:08:23.258372 2235858 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-954411 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-954411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:08:23.258456 2235858 ssh_runner.go:195] Run: crio config
	I0414 14:08:23.306904 2235858 cni.go:84] Creating CNI manager for ""
	I0414 14:08:23.306928 2235858 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:08:23.306941 2235858 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:08:23.306960 2235858 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-954411 NodeName:old-k8s-version-954411 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 14:08:23.307133 2235858 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-954411"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 14:08:23.307214 2235858 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 14:08:23.319310 2235858 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:08:23.319385 2235858 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 14:08:23.329906 2235858 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0414 14:08:23.347588 2235858 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:08:23.365819 2235858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0414 14:08:23.383687 2235858 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I0414 14:08:23.387750 2235858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:08:23.400806 2235858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:08:23.529592 2235858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:08:23.546915 2235858 certs.go:68] Setting up /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411 for IP: 192.168.39.90
	I0414 14:08:23.546952 2235858 certs.go:194] generating shared ca certs ...
	I0414 14:08:23.546984 2235858 certs.go:226] acquiring lock for ca certs: {Name:mkd994da28098ae08a84efba20f096b52fe71222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:08:23.547221 2235858 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key
	I0414 14:08:23.547293 2235858 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key
	I0414 14:08:23.547310 2235858 certs.go:256] generating profile certs ...
	I0414 14:08:23.547480 2235858 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/client.key
	I0414 14:08:23.547572 2235858 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.key.798e3633
	I0414 14:08:23.547635 2235858 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.key
	I0414 14:08:23.547803 2235858 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem (1338 bytes)
	W0414 14:08:23.547854 2235858 certs.go:480] ignoring /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400_empty.pem, impossibly tiny 0 bytes
	I0414 14:08:23.547864 2235858 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 14:08:23.547897 2235858 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem (1078 bytes)
	I0414 14:08:23.547932 2235858 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:08:23.547966 2235858 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem (1675 bytes)
	I0414 14:08:23.548028 2235858 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:08:23.548860 2235858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:08:23.586288 2235858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:08:23.626524 2235858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:08:23.650822 2235858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:08:23.707779 2235858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 14:08:23.744466 2235858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:08:23.786409 2235858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:08:23.818428 2235858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/old-k8s-version-954411/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 14:08:23.845205 2235858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:08:23.870713 2235858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem --> /usr/share/ca-certificates/2190400.pem (1338 bytes)
	I0414 14:08:23.894504 2235858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /usr/share/ca-certificates/21904002.pem (1708 bytes)
	I0414 14:08:23.918397 2235858 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:08:23.935271 2235858 ssh_runner.go:195] Run: openssl version
	I0414 14:08:23.941222 2235858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:08:23.954009 2235858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:08:23.959661 2235858 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:08:23.959720 2235858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:08:23.965868 2235858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:08:23.976433 2235858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2190400.pem && ln -fs /usr/share/ca-certificates/2190400.pem /etc/ssl/certs/2190400.pem"
	I0414 14:08:23.986801 2235858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2190400.pem
	I0414 14:08:23.991029 2235858 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 13:02 /usr/share/ca-certificates/2190400.pem
	I0414 14:08:23.991080 2235858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2190400.pem
	I0414 14:08:23.997058 2235858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2190400.pem /etc/ssl/certs/51391683.0"
	I0414 14:08:24.007781 2235858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21904002.pem && ln -fs /usr/share/ca-certificates/21904002.pem /etc/ssl/certs/21904002.pem"
	I0414 14:08:24.018560 2235858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21904002.pem
	I0414 14:08:24.023170 2235858 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 13:02 /usr/share/ca-certificates/21904002.pem
	I0414 14:08:24.023245 2235858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21904002.pem
	I0414 14:08:24.028803 2235858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21904002.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:08:24.039062 2235858 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:08:24.044090 2235858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 14:08:24.049986 2235858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 14:08:24.055950 2235858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 14:08:24.062050 2235858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 14:08:24.068303 2235858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 14:08:24.075000 2235858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0414 14:08:24.081065 2235858 kubeadm.go:392] StartCluster: {Name:old-k8s-version-954411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-954411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:08:24.081141 2235858 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 14:08:24.081222 2235858 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:08:24.121722 2235858 cri.go:89] found id: ""
	I0414 14:08:24.121790 2235858 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:08:24.132608 2235858 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 14:08:24.132630 2235858 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 14:08:24.132674 2235858 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 14:08:24.143632 2235858 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:08:24.144368 2235858 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-954411" does not appear in /home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:08:24.144770 2235858 kubeconfig.go:62] /home/jenkins/minikube-integration/20623-2183077/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-954411" cluster setting kubeconfig missing "old-k8s-version-954411" context setting]
	I0414 14:08:24.145316 2235858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/kubeconfig: {Name:mka4d12cff403cd78c270c5ea752d21aa135c1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:08:24.146669 2235858 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 14:08:24.155933 2235858 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.90
	I0414 14:08:24.155967 2235858 kubeadm.go:1160] stopping kube-system containers ...
	I0414 14:08:24.155982 2235858 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 14:08:24.156036 2235858 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:08:24.194100 2235858 cri.go:89] found id: ""
	I0414 14:08:24.194181 2235858 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 14:08:24.210153 2235858 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:08:24.220047 2235858 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:08:24.220067 2235858 kubeadm.go:157] found existing configuration files:
	
	I0414 14:08:24.220107 2235858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:08:24.229389 2235858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:08:24.229453 2235858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:08:24.238793 2235858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:08:24.247529 2235858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:08:24.247587 2235858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:08:24.257002 2235858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:08:24.265627 2235858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:08:24.265686 2235858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:08:24.274862 2235858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:08:24.283486 2235858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:08:24.283552 2235858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 14:08:24.293100 2235858 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:08:24.302893 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 14:08:24.450845 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 14:08:25.257173 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 14:08:25.501023 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 14:08:25.607047 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0414 14:08:25.702638 2235858 api_server.go:52] waiting for apiserver process to appear ...
	I0414 14:08:25.702741 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:26.203742 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:26.703721 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:27.203388 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:27.703471 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:28.203660 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:28.703634 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:29.202966 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:29.703778 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:30.203089 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:30.703707 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:31.203051 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:31.702886 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:32.202796 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:32.703407 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:33.202978 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:33.703443 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:34.202852 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:34.703678 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:35.202901 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:35.703495 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:36.203650 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:36.703565 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:37.203080 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:37.703789 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:38.203086 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:38.703655 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:39.202844 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:39.703672 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:40.202983 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:40.703759 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:41.203224 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:41.703813 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:42.203733 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:42.702976 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:43.203099 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:43.702802 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:44.203434 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:44.703033 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:45.202944 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:45.702995 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:46.203097 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:46.703720 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:47.203632 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:47.702998 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:48.203188 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:48.703503 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:49.202826 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:49.703213 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:50.203145 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:50.702885 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:51.203261 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:51.703303 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:52.203790 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:52.703195 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:53.203826 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:53.703059 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:54.203691 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:54.703769 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:55.203216 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:55.703392 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:56.203004 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:56.703026 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:57.203789 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:57.702806 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:58.203821 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:58.703760 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:59.203035 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:08:59.703747 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:00.202796 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:00.702999 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:01.202968 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:01.702995 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:02.203127 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:02.703731 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:03.203813 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:03.703269 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:04.203142 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:04.703851 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:05.203701 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:05.703166 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:06.203137 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:06.703071 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:07.202996 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:07.703433 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:08.202888 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:08.703800 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:09.203120 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:09.703765 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:10.203007 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:10.702801 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:11.203304 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:11.702891 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:12.202876 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:12.703293 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:13.203561 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:13.703268 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:14.203096 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:14.702930 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:15.203172 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:15.703402 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:16.203066 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:16.702873 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:17.203764 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:17.703181 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:18.203095 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:18.703183 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:19.202820 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:19.703714 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:20.203411 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:20.703563 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:21.203679 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:21.702990 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:22.203448 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:22.703728 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:23.202869 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:23.702995 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:24.203104 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:24.703306 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:25.203767 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:25.703065 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:09:25.703144 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:09:25.755976 2235858 cri.go:89] found id: ""
	I0414 14:09:25.756012 2235858 logs.go:282] 0 containers: []
	W0414 14:09:25.756022 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:09:25.756030 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:09:25.756083 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:09:25.798611 2235858 cri.go:89] found id: ""
	I0414 14:09:25.798644 2235858 logs.go:282] 0 containers: []
	W0414 14:09:25.798657 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:09:25.798665 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:09:25.798724 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:09:25.837085 2235858 cri.go:89] found id: ""
	I0414 14:09:25.837112 2235858 logs.go:282] 0 containers: []
	W0414 14:09:25.837121 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:09:25.837127 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:09:25.837177 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:09:25.873399 2235858 cri.go:89] found id: ""
	I0414 14:09:25.873434 2235858 logs.go:282] 0 containers: []
	W0414 14:09:25.873451 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:09:25.873459 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:09:25.873510 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:09:25.907458 2235858 cri.go:89] found id: ""
	I0414 14:09:25.907482 2235858 logs.go:282] 0 containers: []
	W0414 14:09:25.907490 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:09:25.907496 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:09:25.907559 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:09:25.946667 2235858 cri.go:89] found id: ""
	I0414 14:09:25.946706 2235858 logs.go:282] 0 containers: []
	W0414 14:09:25.946717 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:09:25.946726 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:09:25.946789 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:09:25.983346 2235858 cri.go:89] found id: ""
	I0414 14:09:25.983378 2235858 logs.go:282] 0 containers: []
	W0414 14:09:25.983387 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:09:25.983393 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:09:25.983474 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:09:26.022822 2235858 cri.go:89] found id: ""
	I0414 14:09:26.022854 2235858 logs.go:282] 0 containers: []
	W0414 14:09:26.022865 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:09:26.022878 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:09:26.022895 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:09:26.037151 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:09:26.037195 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:09:26.166970 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:09:26.167006 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:09:26.167027 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:09:26.242917 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:09:26.242956 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:09:26.283408 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:09:26.283453 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:09:28.836878 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:28.852279 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:09:28.852350 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:09:28.897693 2235858 cri.go:89] found id: ""
	I0414 14:09:28.897722 2235858 logs.go:282] 0 containers: []
	W0414 14:09:28.897730 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:09:28.897738 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:09:28.897815 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:09:28.937972 2235858 cri.go:89] found id: ""
	I0414 14:09:28.938014 2235858 logs.go:282] 0 containers: []
	W0414 14:09:28.938027 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:09:28.938036 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:09:28.938111 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:09:28.986150 2235858 cri.go:89] found id: ""
	I0414 14:09:28.986179 2235858 logs.go:282] 0 containers: []
	W0414 14:09:28.986189 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:09:28.986197 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:09:28.986266 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:09:29.030604 2235858 cri.go:89] found id: ""
	I0414 14:09:29.030644 2235858 logs.go:282] 0 containers: []
	W0414 14:09:29.030656 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:09:29.030663 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:09:29.030732 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:09:29.087499 2235858 cri.go:89] found id: ""
	I0414 14:09:29.087528 2235858 logs.go:282] 0 containers: []
	W0414 14:09:29.087536 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:09:29.087542 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:09:29.087604 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:09:29.125455 2235858 cri.go:89] found id: ""
	I0414 14:09:29.125493 2235858 logs.go:282] 0 containers: []
	W0414 14:09:29.125505 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:09:29.125513 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:09:29.125568 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:09:29.161114 2235858 cri.go:89] found id: ""
	I0414 14:09:29.161150 2235858 logs.go:282] 0 containers: []
	W0414 14:09:29.161168 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:09:29.161176 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:09:29.161237 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:09:29.197435 2235858 cri.go:89] found id: ""
	I0414 14:09:29.197463 2235858 logs.go:282] 0 containers: []
	W0414 14:09:29.197471 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:09:29.197481 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:09:29.197492 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:09:29.247616 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:09:29.247652 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:09:29.261107 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:09:29.261149 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:09:29.334680 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:09:29.334717 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:09:29.334736 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:09:29.414771 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:09:29.414814 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:09:31.959854 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:31.974882 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:09:31.974954 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:09:32.009801 2235858 cri.go:89] found id: ""
	I0414 14:09:32.009832 2235858 logs.go:282] 0 containers: []
	W0414 14:09:32.009840 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:09:32.009846 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:09:32.009947 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:09:32.046974 2235858 cri.go:89] found id: ""
	I0414 14:09:32.047025 2235858 logs.go:282] 0 containers: []
	W0414 14:09:32.047037 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:09:32.047044 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:09:32.047109 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:09:32.084442 2235858 cri.go:89] found id: ""
	I0414 14:09:32.084480 2235858 logs.go:282] 0 containers: []
	W0414 14:09:32.084489 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:09:32.084495 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:09:32.084565 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:09:32.121310 2235858 cri.go:89] found id: ""
	I0414 14:09:32.121340 2235858 logs.go:282] 0 containers: []
	W0414 14:09:32.121349 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:09:32.121355 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:09:32.121407 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:09:32.156486 2235858 cri.go:89] found id: ""
	I0414 14:09:32.156521 2235858 logs.go:282] 0 containers: []
	W0414 14:09:32.156528 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:09:32.156535 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:09:32.156596 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:09:32.189103 2235858 cri.go:89] found id: ""
	I0414 14:09:32.189128 2235858 logs.go:282] 0 containers: []
	W0414 14:09:32.189138 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:09:32.189147 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:09:32.189211 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:09:32.221986 2235858 cri.go:89] found id: ""
	I0414 14:09:32.222023 2235858 logs.go:282] 0 containers: []
	W0414 14:09:32.222032 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:09:32.222039 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:09:32.222108 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:09:32.257128 2235858 cri.go:89] found id: ""
	I0414 14:09:32.257161 2235858 logs.go:282] 0 containers: []
	W0414 14:09:32.257173 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:09:32.257184 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:09:32.257201 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:09:32.297867 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:09:32.297908 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:09:32.351858 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:09:32.351895 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:09:32.367052 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:09:32.367082 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:09:32.446834 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:09:32.446861 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:09:32.446879 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:09:35.028166 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:35.048129 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:09:35.048193 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:09:35.082619 2235858 cri.go:89] found id: ""
	I0414 14:09:35.082650 2235858 logs.go:282] 0 containers: []
	W0414 14:09:35.082658 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:09:35.082665 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:09:35.082717 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:09:35.118389 2235858 cri.go:89] found id: ""
	I0414 14:09:35.118423 2235858 logs.go:282] 0 containers: []
	W0414 14:09:35.118434 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:09:35.118441 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:09:35.118503 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:09:35.154989 2235858 cri.go:89] found id: ""
	I0414 14:09:35.155024 2235858 logs.go:282] 0 containers: []
	W0414 14:09:35.155037 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:09:35.155046 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:09:35.155112 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:09:35.189628 2235858 cri.go:89] found id: ""
	I0414 14:09:35.189658 2235858 logs.go:282] 0 containers: []
	W0414 14:09:35.189667 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:09:35.189675 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:09:35.189734 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:09:35.224592 2235858 cri.go:89] found id: ""
	I0414 14:09:35.224632 2235858 logs.go:282] 0 containers: []
	W0414 14:09:35.224642 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:09:35.224652 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:09:35.224759 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:09:35.258202 2235858 cri.go:89] found id: ""
	I0414 14:09:35.258234 2235858 logs.go:282] 0 containers: []
	W0414 14:09:35.258243 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:09:35.258249 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:09:35.258303 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:09:35.292458 2235858 cri.go:89] found id: ""
	I0414 14:09:35.292491 2235858 logs.go:282] 0 containers: []
	W0414 14:09:35.292500 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:09:35.292505 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:09:35.292560 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:09:35.327778 2235858 cri.go:89] found id: ""
	I0414 14:09:35.327808 2235858 logs.go:282] 0 containers: []
	W0414 14:09:35.327817 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:09:35.327827 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:09:35.327842 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:09:35.342252 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:09:35.342292 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:09:35.418723 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:09:35.418756 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:09:35.418775 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:09:35.498363 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:09:35.498407 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:09:35.540399 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:09:35.540439 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:09:38.099467 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:38.114799 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:09:38.114868 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:09:38.166024 2235858 cri.go:89] found id: ""
	I0414 14:09:38.166053 2235858 logs.go:282] 0 containers: []
	W0414 14:09:38.166061 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:09:38.166067 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:09:38.166120 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:09:38.201508 2235858 cri.go:89] found id: ""
	I0414 14:09:38.201535 2235858 logs.go:282] 0 containers: []
	W0414 14:09:38.201544 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:09:38.201552 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:09:38.201612 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:09:38.239959 2235858 cri.go:89] found id: ""
	I0414 14:09:38.239999 2235858 logs.go:282] 0 containers: []
	W0414 14:09:38.240008 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:09:38.240014 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:09:38.240108 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:09:38.273842 2235858 cri.go:89] found id: ""
	I0414 14:09:38.273872 2235858 logs.go:282] 0 containers: []
	W0414 14:09:38.273900 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:09:38.273908 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:09:38.273990 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:09:38.309129 2235858 cri.go:89] found id: ""
	I0414 14:09:38.309161 2235858 logs.go:282] 0 containers: []
	W0414 14:09:38.309170 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:09:38.309176 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:09:38.309242 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:09:38.352796 2235858 cri.go:89] found id: ""
	I0414 14:09:38.352830 2235858 logs.go:282] 0 containers: []
	W0414 14:09:38.352841 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:09:38.352849 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:09:38.352915 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:09:38.386612 2235858 cri.go:89] found id: ""
	I0414 14:09:38.386640 2235858 logs.go:282] 0 containers: []
	W0414 14:09:38.386647 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:09:38.386653 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:09:38.386706 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:09:38.423588 2235858 cri.go:89] found id: ""
	I0414 14:09:38.423630 2235858 logs.go:282] 0 containers: []
	W0414 14:09:38.423640 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:09:38.423651 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:09:38.423664 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:09:38.498204 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:09:38.498227 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:09:38.498241 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:09:38.578822 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:09:38.578857 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:09:38.616121 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:09:38.616153 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:09:38.670694 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:09:38.670737 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:09:41.187582 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:41.200820 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:09:41.200884 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:09:41.233623 2235858 cri.go:89] found id: ""
	I0414 14:09:41.233661 2235858 logs.go:282] 0 containers: []
	W0414 14:09:41.233676 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:09:41.233685 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:09:41.233757 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:09:41.266907 2235858 cri.go:89] found id: ""
	I0414 14:09:41.266948 2235858 logs.go:282] 0 containers: []
	W0414 14:09:41.266961 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:09:41.266970 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:09:41.267041 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:09:41.302604 2235858 cri.go:89] found id: ""
	I0414 14:09:41.302639 2235858 logs.go:282] 0 containers: []
	W0414 14:09:41.302660 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:09:41.302669 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:09:41.302733 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:09:41.339244 2235858 cri.go:89] found id: ""
	I0414 14:09:41.339278 2235858 logs.go:282] 0 containers: []
	W0414 14:09:41.339290 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:09:41.339299 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:09:41.339362 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:09:41.377204 2235858 cri.go:89] found id: ""
	I0414 14:09:41.377240 2235858 logs.go:282] 0 containers: []
	W0414 14:09:41.377271 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:09:41.377279 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:09:41.377346 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:09:41.421069 2235858 cri.go:89] found id: ""
	I0414 14:09:41.421102 2235858 logs.go:282] 0 containers: []
	W0414 14:09:41.421110 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:09:41.421116 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:09:41.421194 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:09:41.455523 2235858 cri.go:89] found id: ""
	I0414 14:09:41.455557 2235858 logs.go:282] 0 containers: []
	W0414 14:09:41.455568 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:09:41.455576 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:09:41.455645 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:09:41.492668 2235858 cri.go:89] found id: ""
	I0414 14:09:41.492710 2235858 logs.go:282] 0 containers: []
	W0414 14:09:41.492723 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:09:41.492757 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:09:41.492776 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:09:41.542345 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:09:41.542390 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:09:41.557148 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:09:41.557193 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:09:41.629098 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:09:41.629131 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:09:41.629149 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:09:41.705514 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:09:41.705567 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:09:44.253092 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:44.267775 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:09:44.267851 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:09:44.303060 2235858 cri.go:89] found id: ""
	I0414 14:09:44.303095 2235858 logs.go:282] 0 containers: []
	W0414 14:09:44.303103 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:09:44.303109 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:09:44.303169 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:09:44.340477 2235858 cri.go:89] found id: ""
	I0414 14:09:44.340505 2235858 logs.go:282] 0 containers: []
	W0414 14:09:44.340513 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:09:44.340519 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:09:44.340572 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:09:44.380535 2235858 cri.go:89] found id: ""
	I0414 14:09:44.380564 2235858 logs.go:282] 0 containers: []
	W0414 14:09:44.380573 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:09:44.380579 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:09:44.380633 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:09:44.423418 2235858 cri.go:89] found id: ""
	I0414 14:09:44.423455 2235858 logs.go:282] 0 containers: []
	W0414 14:09:44.423464 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:09:44.423484 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:09:44.423548 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:09:44.458292 2235858 cri.go:89] found id: ""
	I0414 14:09:44.458320 2235858 logs.go:282] 0 containers: []
	W0414 14:09:44.458330 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:09:44.458339 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:09:44.458402 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:09:44.493985 2235858 cri.go:89] found id: ""
	I0414 14:09:44.494020 2235858 logs.go:282] 0 containers: []
	W0414 14:09:44.494032 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:09:44.494040 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:09:44.494107 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:09:44.528508 2235858 cri.go:89] found id: ""
	I0414 14:09:44.528542 2235858 logs.go:282] 0 containers: []
	W0414 14:09:44.528554 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:09:44.528562 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:09:44.528631 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:09:44.563417 2235858 cri.go:89] found id: ""
	I0414 14:09:44.563452 2235858 logs.go:282] 0 containers: []
	W0414 14:09:44.563465 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:09:44.563477 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:09:44.563492 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:09:44.617131 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:09:44.617170 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:09:44.630249 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:09:44.630276 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:09:44.706251 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:09:44.706274 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:09:44.706288 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:09:44.784950 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:09:44.784991 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:09:47.332411 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:47.346258 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:09:47.346344 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:09:47.382793 2235858 cri.go:89] found id: ""
	I0414 14:09:47.382825 2235858 logs.go:282] 0 containers: []
	W0414 14:09:47.382835 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:09:47.382843 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:09:47.382927 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:09:47.419514 2235858 cri.go:89] found id: ""
	I0414 14:09:47.419551 2235858 logs.go:282] 0 containers: []
	W0414 14:09:47.419564 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:09:47.419571 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:09:47.419646 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:09:47.458142 2235858 cri.go:89] found id: ""
	I0414 14:09:47.458177 2235858 logs.go:282] 0 containers: []
	W0414 14:09:47.458188 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:09:47.458199 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:09:47.458261 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:09:47.494629 2235858 cri.go:89] found id: ""
	I0414 14:09:47.494668 2235858 logs.go:282] 0 containers: []
	W0414 14:09:47.494681 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:09:47.494689 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:09:47.494769 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:09:47.531730 2235858 cri.go:89] found id: ""
	I0414 14:09:47.531766 2235858 logs.go:282] 0 containers: []
	W0414 14:09:47.531778 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:09:47.531786 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:09:47.531850 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:09:47.567295 2235858 cri.go:89] found id: ""
	I0414 14:09:47.567338 2235858 logs.go:282] 0 containers: []
	W0414 14:09:47.567349 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:09:47.567357 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:09:47.567423 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:09:47.603575 2235858 cri.go:89] found id: ""
	I0414 14:09:47.603606 2235858 logs.go:282] 0 containers: []
	W0414 14:09:47.603613 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:09:47.603624 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:09:47.603682 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:09:47.637027 2235858 cri.go:89] found id: ""
	I0414 14:09:47.637054 2235858 logs.go:282] 0 containers: []
	W0414 14:09:47.637062 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:09:47.637072 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:09:47.637086 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:09:47.713770 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:09:47.713809 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:09:47.754148 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:09:47.754198 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:09:47.806756 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:09:47.806796 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:09:47.820175 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:09:47.820210 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:09:47.891921 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:09:50.392857 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:50.406993 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:09:50.407088 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:09:50.444297 2235858 cri.go:89] found id: ""
	I0414 14:09:50.444337 2235858 logs.go:282] 0 containers: []
	W0414 14:09:50.444348 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:09:50.444364 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:09:50.444441 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:09:50.479936 2235858 cri.go:89] found id: ""
	I0414 14:09:50.479968 2235858 logs.go:282] 0 containers: []
	W0414 14:09:50.479979 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:09:50.479994 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:09:50.480071 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:09:50.518575 2235858 cri.go:89] found id: ""
	I0414 14:09:50.518612 2235858 logs.go:282] 0 containers: []
	W0414 14:09:50.518621 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:09:50.518626 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:09:50.518690 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:09:50.555027 2235858 cri.go:89] found id: ""
	I0414 14:09:50.555060 2235858 logs.go:282] 0 containers: []
	W0414 14:09:50.555071 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:09:50.555079 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:09:50.555135 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:09:50.593056 2235858 cri.go:89] found id: ""
	I0414 14:09:50.593085 2235858 logs.go:282] 0 containers: []
	W0414 14:09:50.593098 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:09:50.593107 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:09:50.593174 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:09:50.629064 2235858 cri.go:89] found id: ""
	I0414 14:09:50.629095 2235858 logs.go:282] 0 containers: []
	W0414 14:09:50.629108 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:09:50.629117 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:09:50.629197 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:09:50.663158 2235858 cri.go:89] found id: ""
	I0414 14:09:50.663190 2235858 logs.go:282] 0 containers: []
	W0414 14:09:50.663199 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:09:50.663206 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:09:50.663260 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:09:50.699039 2235858 cri.go:89] found id: ""
	I0414 14:09:50.699069 2235858 logs.go:282] 0 containers: []
	W0414 14:09:50.699077 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:09:50.699086 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:09:50.699099 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:09:50.755763 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:09:50.755804 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:09:50.769325 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:09:50.769355 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:09:50.841735 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:09:50.841759 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:09:50.841771 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:09:50.921570 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:09:50.921619 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:09:53.469129 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:53.483592 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:09:53.483671 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:09:53.518636 2235858 cri.go:89] found id: ""
	I0414 14:09:53.518664 2235858 logs.go:282] 0 containers: []
	W0414 14:09:53.518673 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:09:53.518679 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:09:53.518735 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:09:53.554446 2235858 cri.go:89] found id: ""
	I0414 14:09:53.554487 2235858 logs.go:282] 0 containers: []
	W0414 14:09:53.554500 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:09:53.554508 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:09:53.554581 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:09:53.591928 2235858 cri.go:89] found id: ""
	I0414 14:09:53.591978 2235858 logs.go:282] 0 containers: []
	W0414 14:09:53.591986 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:09:53.591992 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:09:53.592044 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:09:53.632226 2235858 cri.go:89] found id: ""
	I0414 14:09:53.632268 2235858 logs.go:282] 0 containers: []
	W0414 14:09:53.632276 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:09:53.632282 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:09:53.632337 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:09:53.669124 2235858 cri.go:89] found id: ""
	I0414 14:09:53.669157 2235858 logs.go:282] 0 containers: []
	W0414 14:09:53.669165 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:09:53.669172 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:09:53.669240 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:09:53.705773 2235858 cri.go:89] found id: ""
	I0414 14:09:53.705807 2235858 logs.go:282] 0 containers: []
	W0414 14:09:53.705818 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:09:53.705826 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:09:53.705918 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:09:53.737841 2235858 cri.go:89] found id: ""
	I0414 14:09:53.737870 2235858 logs.go:282] 0 containers: []
	W0414 14:09:53.737877 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:09:53.737885 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:09:53.737935 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:09:53.772554 2235858 cri.go:89] found id: ""
	I0414 14:09:53.772582 2235858 logs.go:282] 0 containers: []
	W0414 14:09:53.772590 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:09:53.772602 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:09:53.772612 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:09:53.848019 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:09:53.848052 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:09:53.848069 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:09:53.925407 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:09:53.925459 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:09:53.966061 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:09:53.966101 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:09:54.015950 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:09:54.015990 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:09:56.532027 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:56.545265 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:09:56.545332 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:09:56.585852 2235858 cri.go:89] found id: ""
	I0414 14:09:56.585889 2235858 logs.go:282] 0 containers: []
	W0414 14:09:56.585901 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:09:56.585911 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:09:56.585982 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:09:56.628354 2235858 cri.go:89] found id: ""
	I0414 14:09:56.628387 2235858 logs.go:282] 0 containers: []
	W0414 14:09:56.628399 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:09:56.628406 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:09:56.628493 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:09:56.664187 2235858 cri.go:89] found id: ""
	I0414 14:09:56.664228 2235858 logs.go:282] 0 containers: []
	W0414 14:09:56.664241 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:09:56.664249 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:09:56.664318 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:09:56.700434 2235858 cri.go:89] found id: ""
	I0414 14:09:56.700464 2235858 logs.go:282] 0 containers: []
	W0414 14:09:56.700472 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:09:56.700477 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:09:56.700527 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:09:56.738760 2235858 cri.go:89] found id: ""
	I0414 14:09:56.738785 2235858 logs.go:282] 0 containers: []
	W0414 14:09:56.738794 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:09:56.738800 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:09:56.738853 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:09:56.777441 2235858 cri.go:89] found id: ""
	I0414 14:09:56.777472 2235858 logs.go:282] 0 containers: []
	W0414 14:09:56.777480 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:09:56.777487 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:09:56.777541 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:09:56.815369 2235858 cri.go:89] found id: ""
	I0414 14:09:56.815402 2235858 logs.go:282] 0 containers: []
	W0414 14:09:56.815414 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:09:56.815422 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:09:56.815493 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:09:56.850126 2235858 cri.go:89] found id: ""
	I0414 14:09:56.850157 2235858 logs.go:282] 0 containers: []
	W0414 14:09:56.850165 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:09:56.850175 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:09:56.850191 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:09:56.907214 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:09:56.907255 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:09:56.922325 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:09:56.922360 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:09:56.992221 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:09:56.992247 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:09:56.992262 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:09:57.072023 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:09:57.072055 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:09:59.613967 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:09:59.627467 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:09:59.627534 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:09:59.667059 2235858 cri.go:89] found id: ""
	I0414 14:09:59.667092 2235858 logs.go:282] 0 containers: []
	W0414 14:09:59.667109 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:09:59.667118 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:09:59.667181 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:09:59.708805 2235858 cri.go:89] found id: ""
	I0414 14:09:59.708835 2235858 logs.go:282] 0 containers: []
	W0414 14:09:59.708844 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:09:59.708850 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:09:59.708904 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:09:59.747867 2235858 cri.go:89] found id: ""
	I0414 14:09:59.747909 2235858 logs.go:282] 0 containers: []
	W0414 14:09:59.747922 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:09:59.747928 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:09:59.748000 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:09:59.784235 2235858 cri.go:89] found id: ""
	I0414 14:09:59.784274 2235858 logs.go:282] 0 containers: []
	W0414 14:09:59.784286 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:09:59.784306 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:09:59.784387 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:09:59.819302 2235858 cri.go:89] found id: ""
	I0414 14:09:59.819337 2235858 logs.go:282] 0 containers: []
	W0414 14:09:59.819349 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:09:59.819357 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:09:59.819409 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:09:59.855636 2235858 cri.go:89] found id: ""
	I0414 14:09:59.855670 2235858 logs.go:282] 0 containers: []
	W0414 14:09:59.855682 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:09:59.855692 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:09:59.855759 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:09:59.891482 2235858 cri.go:89] found id: ""
	I0414 14:09:59.891509 2235858 logs.go:282] 0 containers: []
	W0414 14:09:59.891517 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:09:59.891522 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:09:59.891571 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:09:59.934697 2235858 cri.go:89] found id: ""
	I0414 14:09:59.934745 2235858 logs.go:282] 0 containers: []
	W0414 14:09:59.934759 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:09:59.934773 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:09:59.934792 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:09:59.985011 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:09:59.985047 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:09:59.998885 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:09:59.998915 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:00.073646 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:00.073675 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:00.073693 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:00.150689 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:00.150736 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:02.690528 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:02.704120 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:02.704219 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:02.740189 2235858 cri.go:89] found id: ""
	I0414 14:10:02.740230 2235858 logs.go:282] 0 containers: []
	W0414 14:10:02.740245 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:02.740256 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:02.740316 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:02.773909 2235858 cri.go:89] found id: ""
	I0414 14:10:02.773938 2235858 logs.go:282] 0 containers: []
	W0414 14:10:02.773946 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:02.773959 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:02.774011 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:02.808250 2235858 cri.go:89] found id: ""
	I0414 14:10:02.808288 2235858 logs.go:282] 0 containers: []
	W0414 14:10:02.808300 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:02.808306 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:02.808359 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:02.842719 2235858 cri.go:89] found id: ""
	I0414 14:10:02.842753 2235858 logs.go:282] 0 containers: []
	W0414 14:10:02.842762 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:02.842768 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:02.842827 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:02.875731 2235858 cri.go:89] found id: ""
	I0414 14:10:02.875763 2235858 logs.go:282] 0 containers: []
	W0414 14:10:02.875772 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:02.875779 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:02.875848 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:02.909317 2235858 cri.go:89] found id: ""
	I0414 14:10:02.909345 2235858 logs.go:282] 0 containers: []
	W0414 14:10:02.909353 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:02.909360 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:02.909414 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:02.943021 2235858 cri.go:89] found id: ""
	I0414 14:10:02.943057 2235858 logs.go:282] 0 containers: []
	W0414 14:10:02.943068 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:02.943076 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:02.943152 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:02.976720 2235858 cri.go:89] found id: ""
	I0414 14:10:02.976772 2235858 logs.go:282] 0 containers: []
	W0414 14:10:02.976783 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:02.976795 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:02.976808 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:03.053778 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:03.053834 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:03.109179 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:03.109210 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:03.161262 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:03.161308 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:03.174622 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:03.174650 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:03.259838 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:05.760893 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:05.775496 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:05.775564 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:05.812977 2235858 cri.go:89] found id: ""
	I0414 14:10:05.813016 2235858 logs.go:282] 0 containers: []
	W0414 14:10:05.813028 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:05.813037 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:05.813117 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:05.847886 2235858 cri.go:89] found id: ""
	I0414 14:10:05.847920 2235858 logs.go:282] 0 containers: []
	W0414 14:10:05.847928 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:05.847934 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:05.848001 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:05.883984 2235858 cri.go:89] found id: ""
	I0414 14:10:05.884018 2235858 logs.go:282] 0 containers: []
	W0414 14:10:05.884028 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:05.884034 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:05.884089 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:05.919342 2235858 cri.go:89] found id: ""
	I0414 14:10:05.919367 2235858 logs.go:282] 0 containers: []
	W0414 14:10:05.919374 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:05.919380 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:05.919437 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:05.954321 2235858 cri.go:89] found id: ""
	I0414 14:10:05.954348 2235858 logs.go:282] 0 containers: []
	W0414 14:10:05.954356 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:05.954362 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:05.954431 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:05.994107 2235858 cri.go:89] found id: ""
	I0414 14:10:05.994136 2235858 logs.go:282] 0 containers: []
	W0414 14:10:05.994144 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:05.994150 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:05.994202 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:06.029697 2235858 cri.go:89] found id: ""
	I0414 14:10:06.029740 2235858 logs.go:282] 0 containers: []
	W0414 14:10:06.029753 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:06.029761 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:06.029835 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:06.074865 2235858 cri.go:89] found id: ""
	I0414 14:10:06.074899 2235858 logs.go:282] 0 containers: []
	W0414 14:10:06.074909 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:06.074921 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:06.074936 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:06.124808 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:06.124849 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:06.138914 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:06.138952 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:06.216064 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:06.216096 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:06.216115 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:06.298750 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:06.298804 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:08.839475 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:08.852609 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:08.852698 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:08.889214 2235858 cri.go:89] found id: ""
	I0414 14:10:08.889251 2235858 logs.go:282] 0 containers: []
	W0414 14:10:08.889263 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:08.889271 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:08.889344 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:08.924754 2235858 cri.go:89] found id: ""
	I0414 14:10:08.924786 2235858 logs.go:282] 0 containers: []
	W0414 14:10:08.924798 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:08.924806 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:08.924874 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:08.961341 2235858 cri.go:89] found id: ""
	I0414 14:10:08.961373 2235858 logs.go:282] 0 containers: []
	W0414 14:10:08.961382 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:08.961388 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:08.961454 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:08.994518 2235858 cri.go:89] found id: ""
	I0414 14:10:08.994551 2235858 logs.go:282] 0 containers: []
	W0414 14:10:08.994559 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:08.994566 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:08.994619 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:09.032867 2235858 cri.go:89] found id: ""
	I0414 14:10:09.032902 2235858 logs.go:282] 0 containers: []
	W0414 14:10:09.032910 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:09.032916 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:09.032990 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:09.071189 2235858 cri.go:89] found id: ""
	I0414 14:10:09.071217 2235858 logs.go:282] 0 containers: []
	W0414 14:10:09.071227 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:09.071241 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:09.071309 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:09.105576 2235858 cri.go:89] found id: ""
	I0414 14:10:09.105602 2235858 logs.go:282] 0 containers: []
	W0414 14:10:09.105616 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:09.105622 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:09.105674 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:09.141718 2235858 cri.go:89] found id: ""
	I0414 14:10:09.141749 2235858 logs.go:282] 0 containers: []
	W0414 14:10:09.141761 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:09.141775 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:09.141795 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:09.189727 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:09.189762 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:09.240235 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:09.240278 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:09.254427 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:09.254468 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:09.332056 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:09.332098 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:09.332125 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:11.916882 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:11.930512 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:11.930612 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:11.977449 2235858 cri.go:89] found id: ""
	I0414 14:10:11.977490 2235858 logs.go:282] 0 containers: []
	W0414 14:10:11.977502 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:11.977511 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:11.977584 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:12.012333 2235858 cri.go:89] found id: ""
	I0414 14:10:12.012363 2235858 logs.go:282] 0 containers: []
	W0414 14:10:12.012371 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:12.012379 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:12.012446 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:12.051783 2235858 cri.go:89] found id: ""
	I0414 14:10:12.051812 2235858 logs.go:282] 0 containers: []
	W0414 14:10:12.051823 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:12.051831 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:12.051902 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:12.093390 2235858 cri.go:89] found id: ""
	I0414 14:10:12.093423 2235858 logs.go:282] 0 containers: []
	W0414 14:10:12.093432 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:12.093438 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:12.093499 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:12.128412 2235858 cri.go:89] found id: ""
	I0414 14:10:12.128447 2235858 logs.go:282] 0 containers: []
	W0414 14:10:12.128456 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:12.128462 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:12.128525 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:12.164812 2235858 cri.go:89] found id: ""
	I0414 14:10:12.164843 2235858 logs.go:282] 0 containers: []
	W0414 14:10:12.164851 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:12.164859 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:12.164915 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:12.202270 2235858 cri.go:89] found id: ""
	I0414 14:10:12.202302 2235858 logs.go:282] 0 containers: []
	W0414 14:10:12.202311 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:12.202316 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:12.202384 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:12.239186 2235858 cri.go:89] found id: ""
	I0414 14:10:12.239227 2235858 logs.go:282] 0 containers: []
	W0414 14:10:12.239239 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:12.239253 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:12.239271 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:12.276803 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:12.276835 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:12.330499 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:12.330544 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:12.344792 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:12.344828 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:12.411268 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:12.411296 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:12.411313 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:14.994594 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:15.009683 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:15.009762 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:15.054548 2235858 cri.go:89] found id: ""
	I0414 14:10:15.054579 2235858 logs.go:282] 0 containers: []
	W0414 14:10:15.054590 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:15.054600 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:15.054669 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:15.094782 2235858 cri.go:89] found id: ""
	I0414 14:10:15.094820 2235858 logs.go:282] 0 containers: []
	W0414 14:10:15.094832 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:15.094840 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:15.094914 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:15.132757 2235858 cri.go:89] found id: ""
	I0414 14:10:15.132789 2235858 logs.go:282] 0 containers: []
	W0414 14:10:15.132799 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:15.132806 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:15.132866 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:15.173492 2235858 cri.go:89] found id: ""
	I0414 14:10:15.173525 2235858 logs.go:282] 0 containers: []
	W0414 14:10:15.173534 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:15.173539 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:15.173592 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:15.233879 2235858 cri.go:89] found id: ""
	I0414 14:10:15.233915 2235858 logs.go:282] 0 containers: []
	W0414 14:10:15.233925 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:15.233933 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:15.234040 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:15.292960 2235858 cri.go:89] found id: ""
	I0414 14:10:15.293000 2235858 logs.go:282] 0 containers: []
	W0414 14:10:15.293014 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:15.293023 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:15.293089 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:15.341109 2235858 cri.go:89] found id: ""
	I0414 14:10:15.341140 2235858 logs.go:282] 0 containers: []
	W0414 14:10:15.341151 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:15.341158 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:15.341237 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:15.378084 2235858 cri.go:89] found id: ""
	I0414 14:10:15.378116 2235858 logs.go:282] 0 containers: []
	W0414 14:10:15.378127 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:15.378142 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:15.378158 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:15.452664 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:15.452698 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:15.452715 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:15.542005 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:15.542057 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:15.587011 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:15.587059 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:15.646197 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:15.646248 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:18.160905 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:18.174987 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:18.175065 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:18.208377 2235858 cri.go:89] found id: ""
	I0414 14:10:18.208408 2235858 logs.go:282] 0 containers: []
	W0414 14:10:18.208418 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:18.208424 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:18.208490 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:18.245667 2235858 cri.go:89] found id: ""
	I0414 14:10:18.245695 2235858 logs.go:282] 0 containers: []
	W0414 14:10:18.245702 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:18.245709 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:18.245770 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:18.284777 2235858 cri.go:89] found id: ""
	I0414 14:10:18.284808 2235858 logs.go:282] 0 containers: []
	W0414 14:10:18.284816 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:18.284821 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:18.284880 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:18.319833 2235858 cri.go:89] found id: ""
	I0414 14:10:18.319864 2235858 logs.go:282] 0 containers: []
	W0414 14:10:18.319872 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:18.319878 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:18.319934 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:18.356151 2235858 cri.go:89] found id: ""
	I0414 14:10:18.356179 2235858 logs.go:282] 0 containers: []
	W0414 14:10:18.356189 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:18.356195 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:18.356257 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:18.394692 2235858 cri.go:89] found id: ""
	I0414 14:10:18.394730 2235858 logs.go:282] 0 containers: []
	W0414 14:10:18.394738 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:18.394744 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:18.394798 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:18.428992 2235858 cri.go:89] found id: ""
	I0414 14:10:18.429024 2235858 logs.go:282] 0 containers: []
	W0414 14:10:18.429033 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:18.429041 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:18.429094 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:18.463569 2235858 cri.go:89] found id: ""
	I0414 14:10:18.463600 2235858 logs.go:282] 0 containers: []
	W0414 14:10:18.463607 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:18.463617 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:18.463629 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:18.515021 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:18.515065 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:18.528250 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:18.528281 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:18.598813 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:18.598846 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:18.598866 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:18.681558 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:18.681595 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:21.221252 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:21.235766 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:21.235880 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:21.272526 2235858 cri.go:89] found id: ""
	I0414 14:10:21.272552 2235858 logs.go:282] 0 containers: []
	W0414 14:10:21.272562 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:21.272568 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:21.272623 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:21.305965 2235858 cri.go:89] found id: ""
	I0414 14:10:21.306002 2235858 logs.go:282] 0 containers: []
	W0414 14:10:21.306010 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:21.306017 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:21.306073 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:21.338951 2235858 cri.go:89] found id: ""
	I0414 14:10:21.338980 2235858 logs.go:282] 0 containers: []
	W0414 14:10:21.338988 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:21.338994 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:21.339064 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:21.375163 2235858 cri.go:89] found id: ""
	I0414 14:10:21.375200 2235858 logs.go:282] 0 containers: []
	W0414 14:10:21.375212 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:21.375220 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:21.375293 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:21.412166 2235858 cri.go:89] found id: ""
	I0414 14:10:21.412214 2235858 logs.go:282] 0 containers: []
	W0414 14:10:21.412262 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:21.412275 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:21.412357 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:21.450843 2235858 cri.go:89] found id: ""
	I0414 14:10:21.450881 2235858 logs.go:282] 0 containers: []
	W0414 14:10:21.450894 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:21.450904 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:21.450976 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:21.486884 2235858 cri.go:89] found id: ""
	I0414 14:10:21.486914 2235858 logs.go:282] 0 containers: []
	W0414 14:10:21.486923 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:21.486929 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:21.486983 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:21.524175 2235858 cri.go:89] found id: ""
	I0414 14:10:21.524205 2235858 logs.go:282] 0 containers: []
	W0414 14:10:21.524213 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:21.524223 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:21.524236 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:21.537498 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:21.537527 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:21.611727 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:21.611750 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:21.611764 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:21.691675 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:21.691713 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:21.736264 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:21.736297 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:24.290827 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:24.304724 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:24.304820 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:24.342883 2235858 cri.go:89] found id: ""
	I0414 14:10:24.342925 2235858 logs.go:282] 0 containers: []
	W0414 14:10:24.342937 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:24.342945 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:24.343012 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:24.377358 2235858 cri.go:89] found id: ""
	I0414 14:10:24.377388 2235858 logs.go:282] 0 containers: []
	W0414 14:10:24.377396 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:24.377402 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:24.377457 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:24.412375 2235858 cri.go:89] found id: ""
	I0414 14:10:24.412412 2235858 logs.go:282] 0 containers: []
	W0414 14:10:24.412426 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:24.412434 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:24.412503 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:24.446823 2235858 cri.go:89] found id: ""
	I0414 14:10:24.446853 2235858 logs.go:282] 0 containers: []
	W0414 14:10:24.446861 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:24.446867 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:24.446925 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:24.480970 2235858 cri.go:89] found id: ""
	I0414 14:10:24.481002 2235858 logs.go:282] 0 containers: []
	W0414 14:10:24.481010 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:24.481017 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:24.481075 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:24.514819 2235858 cri.go:89] found id: ""
	I0414 14:10:24.514847 2235858 logs.go:282] 0 containers: []
	W0414 14:10:24.514855 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:24.514862 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:24.514920 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:24.553511 2235858 cri.go:89] found id: ""
	I0414 14:10:24.553541 2235858 logs.go:282] 0 containers: []
	W0414 14:10:24.553549 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:24.553556 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:24.553614 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:24.590445 2235858 cri.go:89] found id: ""
	I0414 14:10:24.590484 2235858 logs.go:282] 0 containers: []
	W0414 14:10:24.590495 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:24.590508 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:24.590523 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:24.644474 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:24.644520 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:24.658568 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:24.658599 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:24.727591 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:24.727617 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:24.727631 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:24.811053 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:24.811095 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:27.356910 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:27.375144 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:27.375228 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:27.417709 2235858 cri.go:89] found id: ""
	I0414 14:10:27.417762 2235858 logs.go:282] 0 containers: []
	W0414 14:10:27.417775 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:27.417786 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:27.417878 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:27.459766 2235858 cri.go:89] found id: ""
	I0414 14:10:27.459803 2235858 logs.go:282] 0 containers: []
	W0414 14:10:27.459816 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:27.459823 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:27.459892 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:27.500315 2235858 cri.go:89] found id: ""
	I0414 14:10:27.500349 2235858 logs.go:282] 0 containers: []
	W0414 14:10:27.500361 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:27.500369 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:27.500427 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:27.543154 2235858 cri.go:89] found id: ""
	I0414 14:10:27.543188 2235858 logs.go:282] 0 containers: []
	W0414 14:10:27.543200 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:27.543209 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:27.543275 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:27.578861 2235858 cri.go:89] found id: ""
	I0414 14:10:27.578896 2235858 logs.go:282] 0 containers: []
	W0414 14:10:27.578906 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:27.578914 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:27.578984 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:27.613485 2235858 cri.go:89] found id: ""
	I0414 14:10:27.613519 2235858 logs.go:282] 0 containers: []
	W0414 14:10:27.613530 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:27.613538 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:27.613607 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:27.653053 2235858 cri.go:89] found id: ""
	I0414 14:10:27.653090 2235858 logs.go:282] 0 containers: []
	W0414 14:10:27.653102 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:27.653110 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:27.653181 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:27.696766 2235858 cri.go:89] found id: ""
	I0414 14:10:27.696801 2235858 logs.go:282] 0 containers: []
	W0414 14:10:27.696812 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:27.696824 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:27.696841 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:27.711577 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:27.711602 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:27.785047 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:27.785090 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:27.785108 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:27.866535 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:27.866592 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:27.906979 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:27.907024 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:30.456851 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:30.470649 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:30.470735 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:30.511228 2235858 cri.go:89] found id: ""
	I0414 14:10:30.511267 2235858 logs.go:282] 0 containers: []
	W0414 14:10:30.511280 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:30.511288 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:30.511357 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:30.554071 2235858 cri.go:89] found id: ""
	I0414 14:10:30.554117 2235858 logs.go:282] 0 containers: []
	W0414 14:10:30.554130 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:30.554138 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:30.554219 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:30.600304 2235858 cri.go:89] found id: ""
	I0414 14:10:30.600399 2235858 logs.go:282] 0 containers: []
	W0414 14:10:30.600418 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:30.600427 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:30.600529 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:30.644823 2235858 cri.go:89] found id: ""
	I0414 14:10:30.644865 2235858 logs.go:282] 0 containers: []
	W0414 14:10:30.644876 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:30.644885 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:30.644950 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:30.689793 2235858 cri.go:89] found id: ""
	I0414 14:10:30.689824 2235858 logs.go:282] 0 containers: []
	W0414 14:10:30.689846 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:30.689857 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:30.689923 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:30.725739 2235858 cri.go:89] found id: ""
	I0414 14:10:30.725781 2235858 logs.go:282] 0 containers: []
	W0414 14:10:30.725793 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:30.725802 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:30.725876 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:30.763166 2235858 cri.go:89] found id: ""
	I0414 14:10:30.763202 2235858 logs.go:282] 0 containers: []
	W0414 14:10:30.763211 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:30.763218 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:30.763283 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:30.799649 2235858 cri.go:89] found id: ""
	I0414 14:10:30.799683 2235858 logs.go:282] 0 containers: []
	W0414 14:10:30.799694 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:30.799707 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:30.799723 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:30.851051 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:30.851093 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:30.867868 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:30.867901 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:30.942846 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:30.942881 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:30.942900 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:31.037370 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:31.037414 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:33.580855 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:33.595014 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:33.595088 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:33.629457 2235858 cri.go:89] found id: ""
	I0414 14:10:33.629508 2235858 logs.go:282] 0 containers: []
	W0414 14:10:33.629519 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:33.629526 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:33.629580 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:33.670844 2235858 cri.go:89] found id: ""
	I0414 14:10:33.670879 2235858 logs.go:282] 0 containers: []
	W0414 14:10:33.670891 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:33.670898 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:33.670968 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:33.717819 2235858 cri.go:89] found id: ""
	I0414 14:10:33.717855 2235858 logs.go:282] 0 containers: []
	W0414 14:10:33.717867 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:33.717875 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:33.717944 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:33.763428 2235858 cri.go:89] found id: ""
	I0414 14:10:33.763474 2235858 logs.go:282] 0 containers: []
	W0414 14:10:33.763485 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:33.763493 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:33.763608 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:33.813496 2235858 cri.go:89] found id: ""
	I0414 14:10:33.813531 2235858 logs.go:282] 0 containers: []
	W0414 14:10:33.813543 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:33.813551 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:33.813624 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:33.852468 2235858 cri.go:89] found id: ""
	I0414 14:10:33.852506 2235858 logs.go:282] 0 containers: []
	W0414 14:10:33.852518 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:33.852527 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:33.852588 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:33.891455 2235858 cri.go:89] found id: ""
	I0414 14:10:33.891510 2235858 logs.go:282] 0 containers: []
	W0414 14:10:33.891522 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:33.891530 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:33.891603 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:33.935922 2235858 cri.go:89] found id: ""
	I0414 14:10:33.935954 2235858 logs.go:282] 0 containers: []
	W0414 14:10:33.935963 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:33.935982 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:33.935996 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:33.991838 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:33.991885 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:34.006162 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:34.006191 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:34.078539 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:34.078574 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:34.078591 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:34.166197 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:34.166243 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:36.708895 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:36.722606 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:36.722678 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:36.758895 2235858 cri.go:89] found id: ""
	I0414 14:10:36.758923 2235858 logs.go:282] 0 containers: []
	W0414 14:10:36.758931 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:36.758938 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:36.759003 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:36.794661 2235858 cri.go:89] found id: ""
	I0414 14:10:36.794697 2235858 logs.go:282] 0 containers: []
	W0414 14:10:36.794715 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:36.794723 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:36.794800 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:36.833283 2235858 cri.go:89] found id: ""
	I0414 14:10:36.833312 2235858 logs.go:282] 0 containers: []
	W0414 14:10:36.833322 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:36.833330 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:36.833406 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:36.868381 2235858 cri.go:89] found id: ""
	I0414 14:10:36.868412 2235858 logs.go:282] 0 containers: []
	W0414 14:10:36.868422 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:36.868437 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:36.868509 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:36.908001 2235858 cri.go:89] found id: ""
	I0414 14:10:36.908041 2235858 logs.go:282] 0 containers: []
	W0414 14:10:36.908053 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:36.908068 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:36.908132 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:36.948415 2235858 cri.go:89] found id: ""
	I0414 14:10:36.948451 2235858 logs.go:282] 0 containers: []
	W0414 14:10:36.948464 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:36.948472 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:36.948546 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:36.992909 2235858 cri.go:89] found id: ""
	I0414 14:10:36.992940 2235858 logs.go:282] 0 containers: []
	W0414 14:10:36.992948 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:36.992954 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:36.993020 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:37.033104 2235858 cri.go:89] found id: ""
	I0414 14:10:37.033140 2235858 logs.go:282] 0 containers: []
	W0414 14:10:37.033149 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:37.033160 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:37.033172 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:37.089536 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:37.089571 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:37.104197 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:37.104234 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:37.188090 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:37.188119 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:37.188137 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:37.275346 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:37.275407 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:39.819836 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:39.837396 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:39.837495 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:39.880544 2235858 cri.go:89] found id: ""
	I0414 14:10:39.880579 2235858 logs.go:282] 0 containers: []
	W0414 14:10:39.880588 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:39.880596 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:39.880669 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:39.918915 2235858 cri.go:89] found id: ""
	I0414 14:10:39.918951 2235858 logs.go:282] 0 containers: []
	W0414 14:10:39.918964 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:39.918974 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:39.919044 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:39.958008 2235858 cri.go:89] found id: ""
	I0414 14:10:39.958039 2235858 logs.go:282] 0 containers: []
	W0414 14:10:39.958047 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:39.958054 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:39.958118 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:40.003104 2235858 cri.go:89] found id: ""
	I0414 14:10:40.003142 2235858 logs.go:282] 0 containers: []
	W0414 14:10:40.003154 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:40.003162 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:40.003231 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:40.052061 2235858 cri.go:89] found id: ""
	I0414 14:10:40.052102 2235858 logs.go:282] 0 containers: []
	W0414 14:10:40.052117 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:40.052127 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:40.052200 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:40.120716 2235858 cri.go:89] found id: ""
	I0414 14:10:40.120775 2235858 logs.go:282] 0 containers: []
	W0414 14:10:40.120788 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:40.120797 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:40.120863 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:40.170018 2235858 cri.go:89] found id: ""
	I0414 14:10:40.170065 2235858 logs.go:282] 0 containers: []
	W0414 14:10:40.170086 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:40.170096 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:40.170172 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:40.216635 2235858 cri.go:89] found id: ""
	I0414 14:10:40.216673 2235858 logs.go:282] 0 containers: []
	W0414 14:10:40.216683 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:40.216695 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:40.216710 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:40.306397 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:40.306421 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:40.306437 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:40.397360 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:40.397399 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:40.443606 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:40.443657 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:40.505269 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:40.505312 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:43.020899 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:43.034828 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:43.034914 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:43.073549 2235858 cri.go:89] found id: ""
	I0414 14:10:43.073582 2235858 logs.go:282] 0 containers: []
	W0414 14:10:43.073594 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:43.073603 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:43.073680 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:43.115513 2235858 cri.go:89] found id: ""
	I0414 14:10:43.115543 2235858 logs.go:282] 0 containers: []
	W0414 14:10:43.115555 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:43.115564 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:43.115628 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:43.154652 2235858 cri.go:89] found id: ""
	I0414 14:10:43.154686 2235858 logs.go:282] 0 containers: []
	W0414 14:10:43.154708 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:43.154717 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:43.154788 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:43.193249 2235858 cri.go:89] found id: ""
	I0414 14:10:43.193280 2235858 logs.go:282] 0 containers: []
	W0414 14:10:43.193292 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:43.193299 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:43.193375 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:43.232010 2235858 cri.go:89] found id: ""
	I0414 14:10:43.232046 2235858 logs.go:282] 0 containers: []
	W0414 14:10:43.232054 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:43.232061 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:43.232135 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:43.272315 2235858 cri.go:89] found id: ""
	I0414 14:10:43.272344 2235858 logs.go:282] 0 containers: []
	W0414 14:10:43.272353 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:43.272360 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:43.272424 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:43.332102 2235858 cri.go:89] found id: ""
	I0414 14:10:43.332137 2235858 logs.go:282] 0 containers: []
	W0414 14:10:43.332149 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:43.332158 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:43.332221 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:43.369967 2235858 cri.go:89] found id: ""
	I0414 14:10:43.370001 2235858 logs.go:282] 0 containers: []
	W0414 14:10:43.370014 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:43.370026 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:43.370044 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:43.441768 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:43.441823 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:43.460296 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:43.460334 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:43.548400 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:43.548433 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:43.548450 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:43.646835 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:43.646907 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:46.190894 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:46.204876 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:46.204958 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:46.241547 2235858 cri.go:89] found id: ""
	I0414 14:10:46.241576 2235858 logs.go:282] 0 containers: []
	W0414 14:10:46.241584 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:46.241591 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:46.241647 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:46.279310 2235858 cri.go:89] found id: ""
	I0414 14:10:46.279364 2235858 logs.go:282] 0 containers: []
	W0414 14:10:46.279375 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:46.279383 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:46.279456 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:46.313048 2235858 cri.go:89] found id: ""
	I0414 14:10:46.313076 2235858 logs.go:282] 0 containers: []
	W0414 14:10:46.313085 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:46.313091 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:46.313148 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:46.347747 2235858 cri.go:89] found id: ""
	I0414 14:10:46.347777 2235858 logs.go:282] 0 containers: []
	W0414 14:10:46.347788 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:46.347797 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:46.347864 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:46.381667 2235858 cri.go:89] found id: ""
	I0414 14:10:46.381701 2235858 logs.go:282] 0 containers: []
	W0414 14:10:46.381710 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:46.381717 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:46.381767 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:46.416831 2235858 cri.go:89] found id: ""
	I0414 14:10:46.416859 2235858 logs.go:282] 0 containers: []
	W0414 14:10:46.416867 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:46.416873 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:46.416931 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:46.453486 2235858 cri.go:89] found id: ""
	I0414 14:10:46.453525 2235858 logs.go:282] 0 containers: []
	W0414 14:10:46.453538 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:46.453546 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:46.453613 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:46.489479 2235858 cri.go:89] found id: ""
	I0414 14:10:46.489509 2235858 logs.go:282] 0 containers: []
	W0414 14:10:46.489521 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:46.489535 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:46.489552 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:46.545124 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:46.545174 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:46.558747 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:46.558779 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:46.633641 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:46.633669 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:46.633691 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:46.715006 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:46.715053 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:49.258348 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:49.272326 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:49.272409 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:49.312980 2235858 cri.go:89] found id: ""
	I0414 14:10:49.313010 2235858 logs.go:282] 0 containers: []
	W0414 14:10:49.313020 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:49.313029 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:49.313098 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:49.351947 2235858 cri.go:89] found id: ""
	I0414 14:10:49.351984 2235858 logs.go:282] 0 containers: []
	W0414 14:10:49.351995 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:49.352002 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:49.352076 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:49.391064 2235858 cri.go:89] found id: ""
	I0414 14:10:49.391107 2235858 logs.go:282] 0 containers: []
	W0414 14:10:49.391129 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:49.391138 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:49.391208 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:49.436174 2235858 cri.go:89] found id: ""
	I0414 14:10:49.436211 2235858 logs.go:282] 0 containers: []
	W0414 14:10:49.436224 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:49.436232 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:49.436299 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:49.481280 2235858 cri.go:89] found id: ""
	I0414 14:10:49.481318 2235858 logs.go:282] 0 containers: []
	W0414 14:10:49.481331 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:49.481339 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:49.481405 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:49.519376 2235858 cri.go:89] found id: ""
	I0414 14:10:49.519418 2235858 logs.go:282] 0 containers: []
	W0414 14:10:49.519432 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:49.519442 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:49.519513 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:49.567806 2235858 cri.go:89] found id: ""
	I0414 14:10:49.567840 2235858 logs.go:282] 0 containers: []
	W0414 14:10:49.567851 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:49.567867 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:49.567932 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:49.622835 2235858 cri.go:89] found id: ""
	I0414 14:10:49.622864 2235858 logs.go:282] 0 containers: []
	W0414 14:10:49.622874 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:49.622887 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:49.622902 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:49.682451 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:49.682492 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:49.703770 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:49.703801 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:49.835517 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:49.835557 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:49.835582 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:49.914811 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:49.914852 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:52.454498 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:52.470429 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:52.470521 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:52.516231 2235858 cri.go:89] found id: ""
	I0414 14:10:52.516264 2235858 logs.go:282] 0 containers: []
	W0414 14:10:52.516275 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:52.516284 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:52.516343 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:52.557680 2235858 cri.go:89] found id: ""
	I0414 14:10:52.557711 2235858 logs.go:282] 0 containers: []
	W0414 14:10:52.557722 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:52.557730 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:52.557799 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:52.602399 2235858 cri.go:89] found id: ""
	I0414 14:10:52.602434 2235858 logs.go:282] 0 containers: []
	W0414 14:10:52.602446 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:52.602454 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:52.602528 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:52.652032 2235858 cri.go:89] found id: ""
	I0414 14:10:52.652066 2235858 logs.go:282] 0 containers: []
	W0414 14:10:52.652086 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:52.652098 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:52.652182 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:52.693133 2235858 cri.go:89] found id: ""
	I0414 14:10:52.693163 2235858 logs.go:282] 0 containers: []
	W0414 14:10:52.693172 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:52.693177 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:52.693258 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:52.734044 2235858 cri.go:89] found id: ""
	I0414 14:10:52.734086 2235858 logs.go:282] 0 containers: []
	W0414 14:10:52.734112 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:52.734121 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:52.734207 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:52.775088 2235858 cri.go:89] found id: ""
	I0414 14:10:52.775128 2235858 logs.go:282] 0 containers: []
	W0414 14:10:52.775139 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:52.775147 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:52.775216 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:52.807956 2235858 cri.go:89] found id: ""
	I0414 14:10:52.807990 2235858 logs.go:282] 0 containers: []
	W0414 14:10:52.808001 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:52.808013 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:52.808030 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:52.851857 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:52.851898 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:52.907587 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:52.907631 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:52.923504 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:52.923547 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:52.993150 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:52.993178 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:52.993194 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:55.595540 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:55.614063 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:55.614154 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:55.669722 2235858 cri.go:89] found id: ""
	I0414 14:10:55.669759 2235858 logs.go:282] 0 containers: []
	W0414 14:10:55.669770 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:55.669778 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:55.669842 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:55.732410 2235858 cri.go:89] found id: ""
	I0414 14:10:55.732440 2235858 logs.go:282] 0 containers: []
	W0414 14:10:55.732450 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:55.732459 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:55.732521 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:55.775996 2235858 cri.go:89] found id: ""
	I0414 14:10:55.776038 2235858 logs.go:282] 0 containers: []
	W0414 14:10:55.776050 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:55.776057 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:55.776130 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:55.821900 2235858 cri.go:89] found id: ""
	I0414 14:10:55.821943 2235858 logs.go:282] 0 containers: []
	W0414 14:10:55.821954 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:55.821963 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:55.822032 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:55.859876 2235858 cri.go:89] found id: ""
	I0414 14:10:55.859908 2235858 logs.go:282] 0 containers: []
	W0414 14:10:55.859919 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:55.859928 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:55.860000 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:55.904849 2235858 cri.go:89] found id: ""
	I0414 14:10:55.904902 2235858 logs.go:282] 0 containers: []
	W0414 14:10:55.904914 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:55.904923 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:55.905002 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:55.943674 2235858 cri.go:89] found id: ""
	I0414 14:10:55.943707 2235858 logs.go:282] 0 containers: []
	W0414 14:10:55.943715 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:55.943722 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:55.943774 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:55.991664 2235858 cri.go:89] found id: ""
	I0414 14:10:55.991702 2235858 logs.go:282] 0 containers: []
	W0414 14:10:55.991715 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:55.991725 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:55.991736 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:56.075895 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:56.075946 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:10:56.115581 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:56.115617 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:56.173587 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:56.173641 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:56.188269 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:56.188303 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:56.265131 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:58.766386 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:10:58.778883 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:10:58.778965 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:10:58.812214 2235858 cri.go:89] found id: ""
	I0414 14:10:58.812248 2235858 logs.go:282] 0 containers: []
	W0414 14:10:58.812257 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:10:58.812263 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:10:58.812325 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:10:58.847490 2235858 cri.go:89] found id: ""
	I0414 14:10:58.847521 2235858 logs.go:282] 0 containers: []
	W0414 14:10:58.847533 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:10:58.847541 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:10:58.847618 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:10:58.879530 2235858 cri.go:89] found id: ""
	I0414 14:10:58.879560 2235858 logs.go:282] 0 containers: []
	W0414 14:10:58.879571 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:10:58.879580 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:10:58.879651 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:10:58.913328 2235858 cri.go:89] found id: ""
	I0414 14:10:58.913357 2235858 logs.go:282] 0 containers: []
	W0414 14:10:58.913364 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:10:58.913371 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:10:58.913425 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:10:58.948598 2235858 cri.go:89] found id: ""
	I0414 14:10:58.948633 2235858 logs.go:282] 0 containers: []
	W0414 14:10:58.948643 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:10:58.948651 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:10:58.948708 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:10:58.988136 2235858 cri.go:89] found id: ""
	I0414 14:10:58.988168 2235858 logs.go:282] 0 containers: []
	W0414 14:10:58.988175 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:10:58.988182 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:10:58.988233 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:10:59.021135 2235858 cri.go:89] found id: ""
	I0414 14:10:59.021165 2235858 logs.go:282] 0 containers: []
	W0414 14:10:59.021173 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:10:59.021180 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:10:59.021244 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:10:59.055225 2235858 cri.go:89] found id: ""
	I0414 14:10:59.055253 2235858 logs.go:282] 0 containers: []
	W0414 14:10:59.055262 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:10:59.055275 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:10:59.055300 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:10:59.106846 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:10:59.106883 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:10:59.120828 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:10:59.120860 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:10:59.183706 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:10:59.183733 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:10:59.183748 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:10:59.271125 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:10:59.271180 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:01.812880 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:01.828261 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:01.828335 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:01.872688 2235858 cri.go:89] found id: ""
	I0414 14:11:01.872749 2235858 logs.go:282] 0 containers: []
	W0414 14:11:01.872764 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:01.872775 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:01.872859 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:01.910378 2235858 cri.go:89] found id: ""
	I0414 14:11:01.910422 2235858 logs.go:282] 0 containers: []
	W0414 14:11:01.910434 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:01.910443 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:01.910517 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:01.948907 2235858 cri.go:89] found id: ""
	I0414 14:11:01.948950 2235858 logs.go:282] 0 containers: []
	W0414 14:11:01.948962 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:01.948971 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:01.949038 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:01.991421 2235858 cri.go:89] found id: ""
	I0414 14:11:01.991452 2235858 logs.go:282] 0 containers: []
	W0414 14:11:01.991460 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:01.991467 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:01.991520 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:02.028836 2235858 cri.go:89] found id: ""
	I0414 14:11:02.028880 2235858 logs.go:282] 0 containers: []
	W0414 14:11:02.028893 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:02.028900 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:02.028966 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:02.064656 2235858 cri.go:89] found id: ""
	I0414 14:11:02.064688 2235858 logs.go:282] 0 containers: []
	W0414 14:11:02.064696 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:02.064702 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:02.064778 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:02.104636 2235858 cri.go:89] found id: ""
	I0414 14:11:02.104666 2235858 logs.go:282] 0 containers: []
	W0414 14:11:02.104674 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:02.104680 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:02.104764 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:02.142415 2235858 cri.go:89] found id: ""
	I0414 14:11:02.142450 2235858 logs.go:282] 0 containers: []
	W0414 14:11:02.142462 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:02.142472 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:02.142488 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:02.215886 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:02.215919 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:02.215937 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:02.293931 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:02.293977 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:02.335295 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:02.335327 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:02.386660 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:02.386697 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:04.902065 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:04.915775 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:04.915843 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:04.949715 2235858 cri.go:89] found id: ""
	I0414 14:11:04.949748 2235858 logs.go:282] 0 containers: []
	W0414 14:11:04.949759 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:04.949767 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:04.949829 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:04.986574 2235858 cri.go:89] found id: ""
	I0414 14:11:04.986602 2235858 logs.go:282] 0 containers: []
	W0414 14:11:04.986612 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:04.986621 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:04.986684 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:05.024622 2235858 cri.go:89] found id: ""
	I0414 14:11:05.024652 2235858 logs.go:282] 0 containers: []
	W0414 14:11:05.024662 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:05.024669 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:05.024745 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:05.064814 2235858 cri.go:89] found id: ""
	I0414 14:11:05.064841 2235858 logs.go:282] 0 containers: []
	W0414 14:11:05.064850 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:05.064856 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:05.064902 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:05.102676 2235858 cri.go:89] found id: ""
	I0414 14:11:05.102698 2235858 logs.go:282] 0 containers: []
	W0414 14:11:05.102706 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:05.102712 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:05.102760 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:05.136866 2235858 cri.go:89] found id: ""
	I0414 14:11:05.136899 2235858 logs.go:282] 0 containers: []
	W0414 14:11:05.136921 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:05.136930 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:05.136995 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:05.169405 2235858 cri.go:89] found id: ""
	I0414 14:11:05.169438 2235858 logs.go:282] 0 containers: []
	W0414 14:11:05.169450 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:05.169457 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:05.169524 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:05.205534 2235858 cri.go:89] found id: ""
	I0414 14:11:05.205566 2235858 logs.go:282] 0 containers: []
	W0414 14:11:05.205578 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:05.205590 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:05.205608 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:05.220335 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:05.220365 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:05.293328 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:05.293354 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:05.293374 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:05.378280 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:05.378315 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:05.416198 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:05.416242 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:07.968827 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:07.987257 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:07.987326 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:08.028350 2235858 cri.go:89] found id: ""
	I0414 14:11:08.028375 2235858 logs.go:282] 0 containers: []
	W0414 14:11:08.028384 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:08.028391 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:08.028447 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:08.085143 2235858 cri.go:89] found id: ""
	I0414 14:11:08.085173 2235858 logs.go:282] 0 containers: []
	W0414 14:11:08.085183 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:08.085191 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:08.085243 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:08.125918 2235858 cri.go:89] found id: ""
	I0414 14:11:08.125951 2235858 logs.go:282] 0 containers: []
	W0414 14:11:08.125962 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:08.125970 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:08.126032 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:08.173198 2235858 cri.go:89] found id: ""
	I0414 14:11:08.173233 2235858 logs.go:282] 0 containers: []
	W0414 14:11:08.173245 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:08.173253 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:08.173314 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:08.219116 2235858 cri.go:89] found id: ""
	I0414 14:11:08.219153 2235858 logs.go:282] 0 containers: []
	W0414 14:11:08.219162 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:08.219167 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:08.219237 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:08.262937 2235858 cri.go:89] found id: ""
	I0414 14:11:08.262965 2235858 logs.go:282] 0 containers: []
	W0414 14:11:08.262973 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:08.262979 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:08.263044 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:08.307413 2235858 cri.go:89] found id: ""
	I0414 14:11:08.307443 2235858 logs.go:282] 0 containers: []
	W0414 14:11:08.307454 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:08.307462 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:08.307526 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:08.346843 2235858 cri.go:89] found id: ""
	I0414 14:11:08.346875 2235858 logs.go:282] 0 containers: []
	W0414 14:11:08.346886 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:08.346896 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:08.346911 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:08.363520 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:08.363555 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:08.432906 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:08.432929 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:08.432943 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:08.513567 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:08.513608 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:08.554661 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:08.554711 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:11.120882 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:11.138616 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:11.138710 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:11.182093 2235858 cri.go:89] found id: ""
	I0414 14:11:11.182138 2235858 logs.go:282] 0 containers: []
	W0414 14:11:11.182149 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:11.182164 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:11.182234 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:11.219970 2235858 cri.go:89] found id: ""
	I0414 14:11:11.220000 2235858 logs.go:282] 0 containers: []
	W0414 14:11:11.220011 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:11.220018 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:11.220087 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:11.260204 2235858 cri.go:89] found id: ""
	I0414 14:11:11.260234 2235858 logs.go:282] 0 containers: []
	W0414 14:11:11.260245 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:11.260252 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:11.260325 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:11.299299 2235858 cri.go:89] found id: ""
	I0414 14:11:11.299332 2235858 logs.go:282] 0 containers: []
	W0414 14:11:11.299344 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:11.299354 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:11.299417 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:11.340119 2235858 cri.go:89] found id: ""
	I0414 14:11:11.340151 2235858 logs.go:282] 0 containers: []
	W0414 14:11:11.340162 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:11.340172 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:11.340233 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:11.388149 2235858 cri.go:89] found id: ""
	I0414 14:11:11.388271 2235858 logs.go:282] 0 containers: []
	W0414 14:11:11.388293 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:11.388310 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:11.388409 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:11.432984 2235858 cri.go:89] found id: ""
	I0414 14:11:11.433023 2235858 logs.go:282] 0 containers: []
	W0414 14:11:11.433035 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:11.433042 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:11.433112 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:11.482242 2235858 cri.go:89] found id: ""
	I0414 14:11:11.482273 2235858 logs.go:282] 0 containers: []
	W0414 14:11:11.482282 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:11.482303 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:11.482320 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:11.536660 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:11.536700 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:11.601432 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:11.601479 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:11.619612 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:11.619718 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:11.701929 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:11.701959 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:11.701975 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:14.292974 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:14.307759 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:14.307841 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:14.351473 2235858 cri.go:89] found id: ""
	I0414 14:11:14.351511 2235858 logs.go:282] 0 containers: []
	W0414 14:11:14.351523 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:14.351532 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:14.351605 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:14.393427 2235858 cri.go:89] found id: ""
	I0414 14:11:14.393463 2235858 logs.go:282] 0 containers: []
	W0414 14:11:14.393474 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:14.393482 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:14.393556 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:14.433437 2235858 cri.go:89] found id: ""
	I0414 14:11:14.433471 2235858 logs.go:282] 0 containers: []
	W0414 14:11:14.433483 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:14.433492 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:14.433564 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:14.481082 2235858 cri.go:89] found id: ""
	I0414 14:11:14.481117 2235858 logs.go:282] 0 containers: []
	W0414 14:11:14.481129 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:14.481137 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:14.481225 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:14.518045 2235858 cri.go:89] found id: ""
	I0414 14:11:14.518080 2235858 logs.go:282] 0 containers: []
	W0414 14:11:14.518091 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:14.518099 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:14.518176 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:14.558509 2235858 cri.go:89] found id: ""
	I0414 14:11:14.558564 2235858 logs.go:282] 0 containers: []
	W0414 14:11:14.558576 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:14.558585 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:14.558650 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:14.604249 2235858 cri.go:89] found id: ""
	I0414 14:11:14.604288 2235858 logs.go:282] 0 containers: []
	W0414 14:11:14.604308 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:14.604315 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:14.604385 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:14.652765 2235858 cri.go:89] found id: ""
	I0414 14:11:14.652803 2235858 logs.go:282] 0 containers: []
	W0414 14:11:14.652815 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:14.652828 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:14.652843 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:14.731657 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:14.731696 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:14.781506 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:14.781559 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:14.844618 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:14.844674 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:14.861624 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:14.861652 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:14.939526 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:17.440852 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:17.455298 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:17.455378 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:17.495711 2235858 cri.go:89] found id: ""
	I0414 14:11:17.495752 2235858 logs.go:282] 0 containers: []
	W0414 14:11:17.495765 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:17.495774 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:17.495836 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:17.534517 2235858 cri.go:89] found id: ""
	I0414 14:11:17.534552 2235858 logs.go:282] 0 containers: []
	W0414 14:11:17.534565 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:17.534573 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:17.534643 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:17.575130 2235858 cri.go:89] found id: ""
	I0414 14:11:17.575162 2235858 logs.go:282] 0 containers: []
	W0414 14:11:17.575173 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:17.575181 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:17.575249 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:17.608529 2235858 cri.go:89] found id: ""
	I0414 14:11:17.608571 2235858 logs.go:282] 0 containers: []
	W0414 14:11:17.608583 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:17.608591 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:17.608654 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:17.654684 2235858 cri.go:89] found id: ""
	I0414 14:11:17.654731 2235858 logs.go:282] 0 containers: []
	W0414 14:11:17.654745 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:17.654753 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:17.654845 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:17.704156 2235858 cri.go:89] found id: ""
	I0414 14:11:17.704194 2235858 logs.go:282] 0 containers: []
	W0414 14:11:17.704206 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:17.704215 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:17.704281 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:17.749582 2235858 cri.go:89] found id: ""
	I0414 14:11:17.749619 2235858 logs.go:282] 0 containers: []
	W0414 14:11:17.749631 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:17.749639 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:17.749716 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:17.787678 2235858 cri.go:89] found id: ""
	I0414 14:11:17.787779 2235858 logs.go:282] 0 containers: []
	W0414 14:11:17.787798 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:17.787812 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:17.787846 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:17.833927 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:17.833966 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:17.918476 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:17.918541 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:17.936189 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:17.936238 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:18.015167 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:18.015199 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:18.015219 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:20.607802 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:20.622843 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:20.622936 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:20.665470 2235858 cri.go:89] found id: ""
	I0414 14:11:20.665507 2235858 logs.go:282] 0 containers: []
	W0414 14:11:20.665525 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:20.665533 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:20.665589 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:20.707505 2235858 cri.go:89] found id: ""
	I0414 14:11:20.707536 2235858 logs.go:282] 0 containers: []
	W0414 14:11:20.707548 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:20.707556 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:20.707621 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:20.756478 2235858 cri.go:89] found id: ""
	I0414 14:11:20.756512 2235858 logs.go:282] 0 containers: []
	W0414 14:11:20.756522 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:20.756530 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:20.756604 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:20.798331 2235858 cri.go:89] found id: ""
	I0414 14:11:20.798370 2235858 logs.go:282] 0 containers: []
	W0414 14:11:20.798384 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:20.798391 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:20.798460 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:20.841054 2235858 cri.go:89] found id: ""
	I0414 14:11:20.841091 2235858 logs.go:282] 0 containers: []
	W0414 14:11:20.841103 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:20.841114 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:20.841184 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:20.883254 2235858 cri.go:89] found id: ""
	I0414 14:11:20.883289 2235858 logs.go:282] 0 containers: []
	W0414 14:11:20.883302 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:20.883310 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:20.883384 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:20.929322 2235858 cri.go:89] found id: ""
	I0414 14:11:20.929348 2235858 logs.go:282] 0 containers: []
	W0414 14:11:20.929358 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:20.929366 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:20.929418 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:21.012938 2235858 cri.go:89] found id: ""
	I0414 14:11:21.012959 2235858 logs.go:282] 0 containers: []
	W0414 14:11:21.012966 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:21.012975 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:21.012990 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:21.091386 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:21.091512 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:21.108373 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:21.108416 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:21.189064 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:21.189093 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:21.189113 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:21.278208 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:21.278259 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:23.824895 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:23.840061 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:23.840155 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:23.901788 2235858 cri.go:89] found id: ""
	I0414 14:11:23.901818 2235858 logs.go:282] 0 containers: []
	W0414 14:11:23.901827 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:23.901836 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:23.901902 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:23.952019 2235858 cri.go:89] found id: ""
	I0414 14:11:23.952053 2235858 logs.go:282] 0 containers: []
	W0414 14:11:23.952064 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:23.952073 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:23.952147 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:23.995399 2235858 cri.go:89] found id: ""
	I0414 14:11:23.995433 2235858 logs.go:282] 0 containers: []
	W0414 14:11:23.995444 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:23.995453 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:23.995516 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:24.051882 2235858 cri.go:89] found id: ""
	I0414 14:11:24.051933 2235858 logs.go:282] 0 containers: []
	W0414 14:11:24.051961 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:24.051969 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:24.052058 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:24.104357 2235858 cri.go:89] found id: ""
	I0414 14:11:24.104401 2235858 logs.go:282] 0 containers: []
	W0414 14:11:24.104414 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:24.104429 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:24.104525 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:24.149723 2235858 cri.go:89] found id: ""
	I0414 14:11:24.149760 2235858 logs.go:282] 0 containers: []
	W0414 14:11:24.149772 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:24.149781 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:24.149854 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:24.200101 2235858 cri.go:89] found id: ""
	I0414 14:11:24.200203 2235858 logs.go:282] 0 containers: []
	W0414 14:11:24.200222 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:24.200233 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:24.200328 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:24.255969 2235858 cri.go:89] found id: ""
	I0414 14:11:24.256005 2235858 logs.go:282] 0 containers: []
	W0414 14:11:24.256014 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:24.256024 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:24.256048 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:24.309432 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:24.309475 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:24.327440 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:24.327473 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:24.415851 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:24.415881 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:24.415897 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:24.492844 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:24.492889 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:27.048859 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:27.064866 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:27.064954 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:27.106451 2235858 cri.go:89] found id: ""
	I0414 14:11:27.106508 2235858 logs.go:282] 0 containers: []
	W0414 14:11:27.106521 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:27.106529 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:27.106634 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:27.145105 2235858 cri.go:89] found id: ""
	I0414 14:11:27.145143 2235858 logs.go:282] 0 containers: []
	W0414 14:11:27.145166 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:27.145174 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:27.145243 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:27.182955 2235858 cri.go:89] found id: ""
	I0414 14:11:27.182993 2235858 logs.go:282] 0 containers: []
	W0414 14:11:27.183005 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:27.183014 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:27.183082 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:27.222547 2235858 cri.go:89] found id: ""
	I0414 14:11:27.222582 2235858 logs.go:282] 0 containers: []
	W0414 14:11:27.222595 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:27.222603 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:27.222666 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:27.261131 2235858 cri.go:89] found id: ""
	I0414 14:11:27.261174 2235858 logs.go:282] 0 containers: []
	W0414 14:11:27.261185 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:27.261193 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:27.261254 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:27.299142 2235858 cri.go:89] found id: ""
	I0414 14:11:27.299176 2235858 logs.go:282] 0 containers: []
	W0414 14:11:27.299189 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:27.299197 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:27.299282 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:27.336873 2235858 cri.go:89] found id: ""
	I0414 14:11:27.336907 2235858 logs.go:282] 0 containers: []
	W0414 14:11:27.336918 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:27.336926 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:27.336995 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:27.372914 2235858 cri.go:89] found id: ""
	I0414 14:11:27.372949 2235858 logs.go:282] 0 containers: []
	W0414 14:11:27.372962 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:27.372974 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:27.372991 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:27.454861 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:27.454892 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:27.454919 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:27.540448 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:27.540488 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:27.578607 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:27.578673 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:27.634315 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:27.634354 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:30.148848 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:30.167470 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:30.167539 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:30.205995 2235858 cri.go:89] found id: ""
	I0414 14:11:30.206037 2235858 logs.go:282] 0 containers: []
	W0414 14:11:30.206049 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:30.206057 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:30.206124 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:30.246667 2235858 cri.go:89] found id: ""
	I0414 14:11:30.246704 2235858 logs.go:282] 0 containers: []
	W0414 14:11:30.246716 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:30.246724 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:30.246786 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:30.289307 2235858 cri.go:89] found id: ""
	I0414 14:11:30.289343 2235858 logs.go:282] 0 containers: []
	W0414 14:11:30.289354 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:30.289362 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:30.289429 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:30.349492 2235858 cri.go:89] found id: ""
	I0414 14:11:30.349531 2235858 logs.go:282] 0 containers: []
	W0414 14:11:30.349543 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:30.349556 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:30.349623 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:30.397107 2235858 cri.go:89] found id: ""
	I0414 14:11:30.397145 2235858 logs.go:282] 0 containers: []
	W0414 14:11:30.397158 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:30.397166 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:30.397236 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:30.431044 2235858 cri.go:89] found id: ""
	I0414 14:11:30.431073 2235858 logs.go:282] 0 containers: []
	W0414 14:11:30.431081 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:30.431088 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:30.431162 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:30.465871 2235858 cri.go:89] found id: ""
	I0414 14:11:30.465898 2235858 logs.go:282] 0 containers: []
	W0414 14:11:30.465906 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:30.465913 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:30.465981 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:30.503696 2235858 cri.go:89] found id: ""
	I0414 14:11:30.503731 2235858 logs.go:282] 0 containers: []
	W0414 14:11:30.503743 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:30.503759 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:30.503777 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:30.558888 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:30.558932 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:30.575333 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:30.575365 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:30.649025 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:30.649054 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:30.649075 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:30.731316 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:30.731360 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:33.273954 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:33.287866 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:33.287936 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:33.334463 2235858 cri.go:89] found id: ""
	I0414 14:11:33.334500 2235858 logs.go:282] 0 containers: []
	W0414 14:11:33.334512 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:33.334520 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:33.334586 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:33.384398 2235858 cri.go:89] found id: ""
	I0414 14:11:33.384430 2235858 logs.go:282] 0 containers: []
	W0414 14:11:33.384439 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:33.384447 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:33.384517 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:33.422544 2235858 cri.go:89] found id: ""
	I0414 14:11:33.422583 2235858 logs.go:282] 0 containers: []
	W0414 14:11:33.422598 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:33.422608 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:33.422680 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:33.461126 2235858 cri.go:89] found id: ""
	I0414 14:11:33.461165 2235858 logs.go:282] 0 containers: []
	W0414 14:11:33.461193 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:33.461202 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:33.461280 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:33.499474 2235858 cri.go:89] found id: ""
	I0414 14:11:33.499508 2235858 logs.go:282] 0 containers: []
	W0414 14:11:33.499517 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:33.499523 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:33.499579 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:33.541536 2235858 cri.go:89] found id: ""
	I0414 14:11:33.541575 2235858 logs.go:282] 0 containers: []
	W0414 14:11:33.541587 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:33.541595 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:33.541674 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:33.588114 2235858 cri.go:89] found id: ""
	I0414 14:11:33.588148 2235858 logs.go:282] 0 containers: []
	W0414 14:11:33.588158 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:33.588165 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:33.588240 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:33.635373 2235858 cri.go:89] found id: ""
	I0414 14:11:33.635407 2235858 logs.go:282] 0 containers: []
	W0414 14:11:33.635416 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:33.635426 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:33.635441 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:33.651258 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:33.651293 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:33.749091 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:33.749117 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:33.749135 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:33.838398 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:33.838442 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:33.877387 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:33.877427 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:36.436877 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:36.450372 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:36.450467 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:36.506108 2235858 cri.go:89] found id: ""
	I0414 14:11:36.506140 2235858 logs.go:282] 0 containers: []
	W0414 14:11:36.506165 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:36.506174 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:36.506233 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:36.555083 2235858 cri.go:89] found id: ""
	I0414 14:11:36.555111 2235858 logs.go:282] 0 containers: []
	W0414 14:11:36.555123 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:36.555132 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:36.555214 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:36.596825 2235858 cri.go:89] found id: ""
	I0414 14:11:36.596865 2235858 logs.go:282] 0 containers: []
	W0414 14:11:36.596879 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:36.596888 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:36.596971 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:36.644443 2235858 cri.go:89] found id: ""
	I0414 14:11:36.644480 2235858 logs.go:282] 0 containers: []
	W0414 14:11:36.644492 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:36.644500 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:36.644571 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:36.687389 2235858 cri.go:89] found id: ""
	I0414 14:11:36.687426 2235858 logs.go:282] 0 containers: []
	W0414 14:11:36.687438 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:36.687446 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:36.687522 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:36.734870 2235858 cri.go:89] found id: ""
	I0414 14:11:36.734900 2235858 logs.go:282] 0 containers: []
	W0414 14:11:36.734912 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:36.734920 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:36.734984 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:36.776913 2235858 cri.go:89] found id: ""
	I0414 14:11:36.776943 2235858 logs.go:282] 0 containers: []
	W0414 14:11:36.776951 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:36.776957 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:36.777016 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:36.822535 2235858 cri.go:89] found id: ""
	I0414 14:11:36.822569 2235858 logs.go:282] 0 containers: []
	W0414 14:11:36.822581 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:36.822594 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:36.822611 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:36.838966 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:36.839005 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:36.938722 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:36.938755 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:36.938775 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:37.022158 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:37.022207 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:37.075587 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:37.075629 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:39.633104 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:39.651717 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:39.651804 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:39.696105 2235858 cri.go:89] found id: ""
	I0414 14:11:39.696146 2235858 logs.go:282] 0 containers: []
	W0414 14:11:39.696159 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:39.696168 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:39.696235 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:39.735324 2235858 cri.go:89] found id: ""
	I0414 14:11:39.735356 2235858 logs.go:282] 0 containers: []
	W0414 14:11:39.735366 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:39.735372 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:39.735443 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:39.776293 2235858 cri.go:89] found id: ""
	I0414 14:11:39.776332 2235858 logs.go:282] 0 containers: []
	W0414 14:11:39.776346 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:39.776354 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:39.776434 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:39.812968 2235858 cri.go:89] found id: ""
	I0414 14:11:39.813006 2235858 logs.go:282] 0 containers: []
	W0414 14:11:39.813018 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:39.813026 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:39.813092 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:39.850164 2235858 cri.go:89] found id: ""
	I0414 14:11:39.850200 2235858 logs.go:282] 0 containers: []
	W0414 14:11:39.850212 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:39.850220 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:39.850284 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:39.888229 2235858 cri.go:89] found id: ""
	I0414 14:11:39.888263 2235858 logs.go:282] 0 containers: []
	W0414 14:11:39.888275 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:39.888284 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:39.888354 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:39.933603 2235858 cri.go:89] found id: ""
	I0414 14:11:39.933640 2235858 logs.go:282] 0 containers: []
	W0414 14:11:39.933653 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:39.933661 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:39.933737 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:39.976091 2235858 cri.go:89] found id: ""
	I0414 14:11:39.976119 2235858 logs.go:282] 0 containers: []
	W0414 14:11:39.976127 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:39.976138 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:39.976149 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:39.989615 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:39.989646 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:40.067558 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:40.067584 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:40.067599 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:40.152470 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:40.152526 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:40.200573 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:40.200606 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:42.751219 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:42.765053 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:42.765150 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:42.801702 2235858 cri.go:89] found id: ""
	I0414 14:11:42.801730 2235858 logs.go:282] 0 containers: []
	W0414 14:11:42.801742 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:42.801758 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:42.801822 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:42.834453 2235858 cri.go:89] found id: ""
	I0414 14:11:42.834482 2235858 logs.go:282] 0 containers: []
	W0414 14:11:42.834490 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:42.834496 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:42.834547 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:42.867425 2235858 cri.go:89] found id: ""
	I0414 14:11:42.867456 2235858 logs.go:282] 0 containers: []
	W0414 14:11:42.867464 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:42.867470 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:42.867555 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:42.902216 2235858 cri.go:89] found id: ""
	I0414 14:11:42.902243 2235858 logs.go:282] 0 containers: []
	W0414 14:11:42.902251 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:42.902257 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:42.902314 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:42.936494 2235858 cri.go:89] found id: ""
	I0414 14:11:42.936536 2235858 logs.go:282] 0 containers: []
	W0414 14:11:42.936546 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:42.936553 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:42.936610 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:42.971535 2235858 cri.go:89] found id: ""
	I0414 14:11:42.971570 2235858 logs.go:282] 0 containers: []
	W0414 14:11:42.971582 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:42.971590 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:42.971655 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:43.016768 2235858 cri.go:89] found id: ""
	I0414 14:11:43.016800 2235858 logs.go:282] 0 containers: []
	W0414 14:11:43.016811 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:43.016817 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:43.016884 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:43.059974 2235858 cri.go:89] found id: ""
	I0414 14:11:43.060004 2235858 logs.go:282] 0 containers: []
	W0414 14:11:43.060011 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:43.060021 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:43.060034 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:43.114895 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:43.114938 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:43.132824 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:43.132863 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:43.207642 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:43.207670 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:43.207693 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:43.288025 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:43.288081 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:45.838348 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:45.856526 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:45.856617 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:45.896672 2235858 cri.go:89] found id: ""
	I0414 14:11:45.896713 2235858 logs.go:282] 0 containers: []
	W0414 14:11:45.896738 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:45.896746 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:45.896817 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:45.936570 2235858 cri.go:89] found id: ""
	I0414 14:11:45.936606 2235858 logs.go:282] 0 containers: []
	W0414 14:11:45.936615 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:45.936620 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:45.936691 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:45.976719 2235858 cri.go:89] found id: ""
	I0414 14:11:45.976767 2235858 logs.go:282] 0 containers: []
	W0414 14:11:45.976778 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:45.976787 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:45.976849 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:46.017227 2235858 cri.go:89] found id: ""
	I0414 14:11:46.017268 2235858 logs.go:282] 0 containers: []
	W0414 14:11:46.017281 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:46.017289 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:46.017372 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:46.055076 2235858 cri.go:89] found id: ""
	I0414 14:11:46.055113 2235858 logs.go:282] 0 containers: []
	W0414 14:11:46.055127 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:46.055135 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:46.055249 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:46.097601 2235858 cri.go:89] found id: ""
	I0414 14:11:46.097637 2235858 logs.go:282] 0 containers: []
	W0414 14:11:46.097650 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:46.097659 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:46.097735 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:46.139799 2235858 cri.go:89] found id: ""
	I0414 14:11:46.139831 2235858 logs.go:282] 0 containers: []
	W0414 14:11:46.139844 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:46.139853 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:46.139929 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:46.174785 2235858 cri.go:89] found id: ""
	I0414 14:11:46.174821 2235858 logs.go:282] 0 containers: []
	W0414 14:11:46.174835 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:46.174848 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:46.174871 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:46.227843 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:46.227887 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:46.242613 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:46.242653 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:46.338265 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:46.338294 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:46.338315 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:46.418199 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:46.418248 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:48.959774 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:48.976025 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:48.976093 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:49.025675 2235858 cri.go:89] found id: ""
	I0414 14:11:49.025707 2235858 logs.go:282] 0 containers: []
	W0414 14:11:49.025733 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:49.025757 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:49.025823 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:49.067150 2235858 cri.go:89] found id: ""
	I0414 14:11:49.067192 2235858 logs.go:282] 0 containers: []
	W0414 14:11:49.067219 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:49.067227 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:49.067283 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:49.106919 2235858 cri.go:89] found id: ""
	I0414 14:11:49.106956 2235858 logs.go:282] 0 containers: []
	W0414 14:11:49.106966 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:49.106973 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:49.107039 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:49.152633 2235858 cri.go:89] found id: ""
	I0414 14:11:49.152665 2235858 logs.go:282] 0 containers: []
	W0414 14:11:49.152677 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:49.152684 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:49.152820 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:49.198001 2235858 cri.go:89] found id: ""
	I0414 14:11:49.198029 2235858 logs.go:282] 0 containers: []
	W0414 14:11:49.198039 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:49.198047 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:49.198105 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:49.248846 2235858 cri.go:89] found id: ""
	I0414 14:11:49.248880 2235858 logs.go:282] 0 containers: []
	W0414 14:11:49.248897 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:49.248913 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:49.248985 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:49.296660 2235858 cri.go:89] found id: ""
	I0414 14:11:49.296692 2235858 logs.go:282] 0 containers: []
	W0414 14:11:49.296703 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:49.296711 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:49.296799 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:49.338659 2235858 cri.go:89] found id: ""
	I0414 14:11:49.338685 2235858 logs.go:282] 0 containers: []
	W0414 14:11:49.338700 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:49.338712 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:49.338725 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:49.419130 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:49.419177 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:49.465043 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:49.465084 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:49.523987 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:49.524024 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:49.540895 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:49.540927 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:49.625084 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:52.126235 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:52.144393 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:52.144463 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:52.183977 2235858 cri.go:89] found id: ""
	I0414 14:11:52.184006 2235858 logs.go:282] 0 containers: []
	W0414 14:11:52.184016 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:52.184024 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:52.184083 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:52.224658 2235858 cri.go:89] found id: ""
	I0414 14:11:52.224691 2235858 logs.go:282] 0 containers: []
	W0414 14:11:52.224703 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:52.224710 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:52.224783 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:52.260786 2235858 cri.go:89] found id: ""
	I0414 14:11:52.260818 2235858 logs.go:282] 0 containers: []
	W0414 14:11:52.260830 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:52.260838 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:52.260900 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:52.299065 2235858 cri.go:89] found id: ""
	I0414 14:11:52.299097 2235858 logs.go:282] 0 containers: []
	W0414 14:11:52.299108 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:52.299117 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:52.299174 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:52.344610 2235858 cri.go:89] found id: ""
	I0414 14:11:52.344644 2235858 logs.go:282] 0 containers: []
	W0414 14:11:52.344656 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:52.344664 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:52.344748 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:52.385919 2235858 cri.go:89] found id: ""
	I0414 14:11:52.385946 2235858 logs.go:282] 0 containers: []
	W0414 14:11:52.385957 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:52.386010 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:52.386094 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:52.429507 2235858 cri.go:89] found id: ""
	I0414 14:11:52.429537 2235858 logs.go:282] 0 containers: []
	W0414 14:11:52.429548 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:52.429557 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:52.429614 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:52.469992 2235858 cri.go:89] found id: ""
	I0414 14:11:52.470016 2235858 logs.go:282] 0 containers: []
	W0414 14:11:52.470025 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:52.470038 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:52.470059 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:52.484012 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:52.484050 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:52.580567 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:52.580596 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:52.580615 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:52.664385 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:52.664429 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:52.713251 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:52.713283 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:55.272842 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:55.288339 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:55.288420 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:55.328459 2235858 cri.go:89] found id: ""
	I0414 14:11:55.328495 2235858 logs.go:282] 0 containers: []
	W0414 14:11:55.328507 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:55.328515 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:55.328602 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:55.374865 2235858 cri.go:89] found id: ""
	I0414 14:11:55.374897 2235858 logs.go:282] 0 containers: []
	W0414 14:11:55.374907 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:55.374915 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:55.374967 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:55.420171 2235858 cri.go:89] found id: ""
	I0414 14:11:55.420210 2235858 logs.go:282] 0 containers: []
	W0414 14:11:55.420220 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:55.420228 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:55.420304 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:55.465769 2235858 cri.go:89] found id: ""
	I0414 14:11:55.465808 2235858 logs.go:282] 0 containers: []
	W0414 14:11:55.465821 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:55.465831 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:55.465912 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:55.503955 2235858 cri.go:89] found id: ""
	I0414 14:11:55.503985 2235858 logs.go:282] 0 containers: []
	W0414 14:11:55.503994 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:55.504000 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:55.504074 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:55.551797 2235858 cri.go:89] found id: ""
	I0414 14:11:55.551832 2235858 logs.go:282] 0 containers: []
	W0414 14:11:55.551846 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:55.551855 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:55.551933 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:55.604343 2235858 cri.go:89] found id: ""
	I0414 14:11:55.604378 2235858 logs.go:282] 0 containers: []
	W0414 14:11:55.604390 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:55.604399 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:55.604468 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:55.664874 2235858 cri.go:89] found id: ""
	I0414 14:11:55.664917 2235858 logs.go:282] 0 containers: []
	W0414 14:11:55.664930 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:55.664944 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:55.664960 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:55.743232 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:55.743284 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:55.761092 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:55.761126 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:55.841243 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:55.841288 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:55.841308 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:55.927800 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:55.927843 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:11:58.477177 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:11:58.491423 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:11:58.491507 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:11:58.533252 2235858 cri.go:89] found id: ""
	I0414 14:11:58.533288 2235858 logs.go:282] 0 containers: []
	W0414 14:11:58.533301 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:11:58.533310 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:11:58.533385 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:11:58.577463 2235858 cri.go:89] found id: ""
	I0414 14:11:58.577498 2235858 logs.go:282] 0 containers: []
	W0414 14:11:58.577509 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:11:58.577518 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:11:58.577583 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:11:58.628857 2235858 cri.go:89] found id: ""
	I0414 14:11:58.628893 2235858 logs.go:282] 0 containers: []
	W0414 14:11:58.628905 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:11:58.628913 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:11:58.628991 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:11:58.666376 2235858 cri.go:89] found id: ""
	I0414 14:11:58.666414 2235858 logs.go:282] 0 containers: []
	W0414 14:11:58.666426 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:11:58.666433 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:11:58.666501 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:11:58.714421 2235858 cri.go:89] found id: ""
	I0414 14:11:58.714461 2235858 logs.go:282] 0 containers: []
	W0414 14:11:58.714473 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:11:58.714481 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:11:58.714552 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:11:58.760410 2235858 cri.go:89] found id: ""
	I0414 14:11:58.760446 2235858 logs.go:282] 0 containers: []
	W0414 14:11:58.760457 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:11:58.760466 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:11:58.760535 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:11:58.793299 2235858 cri.go:89] found id: ""
	I0414 14:11:58.793328 2235858 logs.go:282] 0 containers: []
	W0414 14:11:58.793339 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:11:58.793348 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:11:58.793409 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:11:58.834113 2235858 cri.go:89] found id: ""
	I0414 14:11:58.834165 2235858 logs.go:282] 0 containers: []
	W0414 14:11:58.834176 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:11:58.834189 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:11:58.834206 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:11:58.914962 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:11:58.915013 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:11:58.932294 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:11:58.932340 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:11:59.011418 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:11:59.011445 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:11:59.011462 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:11:59.122351 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:11:59.122400 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:12:01.672140 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:12:01.691734 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:12:01.691822 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:12:01.743103 2235858 cri.go:89] found id: ""
	I0414 14:12:01.743139 2235858 logs.go:282] 0 containers: []
	W0414 14:12:01.743151 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:12:01.743159 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:12:01.743222 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:12:01.789694 2235858 cri.go:89] found id: ""
	I0414 14:12:01.789731 2235858 logs.go:282] 0 containers: []
	W0414 14:12:01.789744 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:12:01.789776 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:12:01.789852 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:12:01.834478 2235858 cri.go:89] found id: ""
	I0414 14:12:01.834514 2235858 logs.go:282] 0 containers: []
	W0414 14:12:01.834527 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:12:01.834535 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:12:01.834605 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:12:01.874716 2235858 cri.go:89] found id: ""
	I0414 14:12:01.874747 2235858 logs.go:282] 0 containers: []
	W0414 14:12:01.874755 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:12:01.874761 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:12:01.874826 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:12:01.917032 2235858 cri.go:89] found id: ""
	I0414 14:12:01.917064 2235858 logs.go:282] 0 containers: []
	W0414 14:12:01.917072 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:12:01.917078 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:12:01.917134 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:12:01.953483 2235858 cri.go:89] found id: ""
	I0414 14:12:01.953514 2235858 logs.go:282] 0 containers: []
	W0414 14:12:01.953526 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:12:01.953534 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:12:01.953608 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:12:01.990999 2235858 cri.go:89] found id: ""
	I0414 14:12:01.991032 2235858 logs.go:282] 0 containers: []
	W0414 14:12:01.991041 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:12:01.991047 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:12:01.991111 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:12:02.025965 2235858 cri.go:89] found id: ""
	I0414 14:12:02.026004 2235858 logs.go:282] 0 containers: []
	W0414 14:12:02.026016 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:12:02.026028 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:12:02.026044 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:12:02.079215 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:12:02.079259 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:12:02.093095 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:12:02.093132 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:12:02.171473 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:12:02.171496 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:12:02.171510 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:12:02.261435 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:12:02.261487 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:12:04.808004 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:12:04.823764 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:12:04.823854 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:12:04.867113 2235858 cri.go:89] found id: ""
	I0414 14:12:04.867149 2235858 logs.go:282] 0 containers: []
	W0414 14:12:04.867162 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:12:04.867170 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:12:04.867241 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:12:04.905098 2235858 cri.go:89] found id: ""
	I0414 14:12:04.905131 2235858 logs.go:282] 0 containers: []
	W0414 14:12:04.905146 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:12:04.905155 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:12:04.905231 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:12:04.941678 2235858 cri.go:89] found id: ""
	I0414 14:12:04.941721 2235858 logs.go:282] 0 containers: []
	W0414 14:12:04.941734 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:12:04.941742 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:12:04.941809 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:12:04.978846 2235858 cri.go:89] found id: ""
	I0414 14:12:04.978883 2235858 logs.go:282] 0 containers: []
	W0414 14:12:04.978895 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:12:04.978904 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:12:04.978985 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:12:05.022825 2235858 cri.go:89] found id: ""
	I0414 14:12:05.022857 2235858 logs.go:282] 0 containers: []
	W0414 14:12:05.022868 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:12:05.022877 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:12:05.022942 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:12:05.060150 2235858 cri.go:89] found id: ""
	I0414 14:12:05.060185 2235858 logs.go:282] 0 containers: []
	W0414 14:12:05.060198 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:12:05.060208 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:12:05.060288 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:12:05.140515 2235858 cri.go:89] found id: ""
	I0414 14:12:05.140568 2235858 logs.go:282] 0 containers: []
	W0414 14:12:05.140590 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:12:05.140599 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:12:05.140674 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:12:05.184014 2235858 cri.go:89] found id: ""
	I0414 14:12:05.184049 2235858 logs.go:282] 0 containers: []
	W0414 14:12:05.184058 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:12:05.184071 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:12:05.184105 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:12:05.262900 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:12:05.262966 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:12:05.278508 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:12:05.278554 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:12:05.360447 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:12:05.360482 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:12:05.360501 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:12:05.480661 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:12:05.480742 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:12:08.039172 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:12:08.054755 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:12:08.054825 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:12:08.095685 2235858 cri.go:89] found id: ""
	I0414 14:12:08.095728 2235858 logs.go:282] 0 containers: []
	W0414 14:12:08.095741 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:12:08.095749 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:12:08.095826 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:12:08.132889 2235858 cri.go:89] found id: ""
	I0414 14:12:08.132930 2235858 logs.go:282] 0 containers: []
	W0414 14:12:08.132943 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:12:08.132952 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:12:08.133037 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:12:08.171679 2235858 cri.go:89] found id: ""
	I0414 14:12:08.171720 2235858 logs.go:282] 0 containers: []
	W0414 14:12:08.171732 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:12:08.171739 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:12:08.171800 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:12:08.210607 2235858 cri.go:89] found id: ""
	I0414 14:12:08.210643 2235858 logs.go:282] 0 containers: []
	W0414 14:12:08.210656 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:12:08.210665 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:12:08.210733 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:12:08.248818 2235858 cri.go:89] found id: ""
	I0414 14:12:08.248857 2235858 logs.go:282] 0 containers: []
	W0414 14:12:08.248870 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:12:08.248879 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:12:08.249000 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:12:08.286248 2235858 cri.go:89] found id: ""
	I0414 14:12:08.286282 2235858 logs.go:282] 0 containers: []
	W0414 14:12:08.286293 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:12:08.286301 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:12:08.286369 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:12:08.329620 2235858 cri.go:89] found id: ""
	I0414 14:12:08.329660 2235858 logs.go:282] 0 containers: []
	W0414 14:12:08.329672 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:12:08.329681 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:12:08.329758 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:12:08.373729 2235858 cri.go:89] found id: ""
	I0414 14:12:08.373765 2235858 logs.go:282] 0 containers: []
	W0414 14:12:08.373778 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:12:08.373793 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:12:08.373810 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:12:08.458376 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:12:08.458422 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:12:08.515646 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:12:08.515694 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:12:08.576427 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:12:08.576472 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:12:08.591141 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:12:08.591181 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:12:08.672029 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:12:11.172897 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:12:11.186226 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:12:11.186297 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:12:11.219623 2235858 cri.go:89] found id: ""
	I0414 14:12:11.219660 2235858 logs.go:282] 0 containers: []
	W0414 14:12:11.219672 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:12:11.219680 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:12:11.219748 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:12:11.256004 2235858 cri.go:89] found id: ""
	I0414 14:12:11.256035 2235858 logs.go:282] 0 containers: []
	W0414 14:12:11.256044 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:12:11.256050 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:12:11.256103 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:12:11.288756 2235858 cri.go:89] found id: ""
	I0414 14:12:11.288787 2235858 logs.go:282] 0 containers: []
	W0414 14:12:11.288798 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:12:11.288806 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:12:11.288856 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:12:11.325556 2235858 cri.go:89] found id: ""
	I0414 14:12:11.325585 2235858 logs.go:282] 0 containers: []
	W0414 14:12:11.325595 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:12:11.325603 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:12:11.325670 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:12:11.366662 2235858 cri.go:89] found id: ""
	I0414 14:12:11.366691 2235858 logs.go:282] 0 containers: []
	W0414 14:12:11.366699 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:12:11.366705 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:12:11.366758 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:12:11.403276 2235858 cri.go:89] found id: ""
	I0414 14:12:11.403309 2235858 logs.go:282] 0 containers: []
	W0414 14:12:11.403317 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:12:11.403322 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:12:11.403375 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:12:11.438344 2235858 cri.go:89] found id: ""
	I0414 14:12:11.438375 2235858 logs.go:282] 0 containers: []
	W0414 14:12:11.438383 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:12:11.438389 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:12:11.438440 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:12:11.474191 2235858 cri.go:89] found id: ""
	I0414 14:12:11.474226 2235858 logs.go:282] 0 containers: []
	W0414 14:12:11.474238 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:12:11.474251 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:12:11.474268 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:12:11.549861 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:12:11.549902 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:12:11.602991 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:12:11.603030 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:12:11.654308 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:12:11.654349 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:12:11.668220 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:12:11.668259 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:12:11.749291 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:12:14.249792 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:12:14.265481 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:12:14.265575 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:12:14.305212 2235858 cri.go:89] found id: ""
	I0414 14:12:14.305245 2235858 logs.go:282] 0 containers: []
	W0414 14:12:14.305254 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:12:14.305263 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:12:14.305327 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:12:14.348614 2235858 cri.go:89] found id: ""
	I0414 14:12:14.348651 2235858 logs.go:282] 0 containers: []
	W0414 14:12:14.348664 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:12:14.348671 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:12:14.348756 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:12:14.387240 2235858 cri.go:89] found id: ""
	I0414 14:12:14.387276 2235858 logs.go:282] 0 containers: []
	W0414 14:12:14.387288 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:12:14.387296 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:12:14.387354 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:12:14.424000 2235858 cri.go:89] found id: ""
	I0414 14:12:14.424038 2235858 logs.go:282] 0 containers: []
	W0414 14:12:14.424053 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:12:14.424063 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:12:14.424131 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:12:14.460783 2235858 cri.go:89] found id: ""
	I0414 14:12:14.460817 2235858 logs.go:282] 0 containers: []
	W0414 14:12:14.460826 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:12:14.460832 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:12:14.460901 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:12:14.497499 2235858 cri.go:89] found id: ""
	I0414 14:12:14.497551 2235858 logs.go:282] 0 containers: []
	W0414 14:12:14.497564 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:12:14.497573 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:12:14.497642 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:12:14.542090 2235858 cri.go:89] found id: ""
	I0414 14:12:14.542123 2235858 logs.go:282] 0 containers: []
	W0414 14:12:14.542136 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:12:14.542145 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:12:14.542230 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:12:14.580719 2235858 cri.go:89] found id: ""
	I0414 14:12:14.580766 2235858 logs.go:282] 0 containers: []
	W0414 14:12:14.580778 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:12:14.580790 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:12:14.580807 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:12:14.638232 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:12:14.638273 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:12:14.654623 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:12:14.654652 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:12:14.733187 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:12:14.733219 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:12:14.733236 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:12:14.821389 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:12:14.821438 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:12:17.371778 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:12:17.387204 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:12:17.387295 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:12:17.427341 2235858 cri.go:89] found id: ""
	I0414 14:12:17.427373 2235858 logs.go:282] 0 containers: []
	W0414 14:12:17.427381 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:12:17.427387 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:12:17.427441 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:12:17.468668 2235858 cri.go:89] found id: ""
	I0414 14:12:17.468696 2235858 logs.go:282] 0 containers: []
	W0414 14:12:17.468703 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:12:17.468709 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:12:17.468786 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:12:17.508898 2235858 cri.go:89] found id: ""
	I0414 14:12:17.508927 2235858 logs.go:282] 0 containers: []
	W0414 14:12:17.508935 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:12:17.508941 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:12:17.508994 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:12:17.547491 2235858 cri.go:89] found id: ""
	I0414 14:12:17.547520 2235858 logs.go:282] 0 containers: []
	W0414 14:12:17.547528 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:12:17.547535 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:12:17.547593 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:12:17.608441 2235858 cri.go:89] found id: ""
	I0414 14:12:17.608480 2235858 logs.go:282] 0 containers: []
	W0414 14:12:17.608493 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:12:17.608502 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:12:17.608574 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:12:17.643436 2235858 cri.go:89] found id: ""
	I0414 14:12:17.643473 2235858 logs.go:282] 0 containers: []
	W0414 14:12:17.643484 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:12:17.643492 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:12:17.643556 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:12:17.678112 2235858 cri.go:89] found id: ""
	I0414 14:12:17.678152 2235858 logs.go:282] 0 containers: []
	W0414 14:12:17.678162 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:12:17.678169 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:12:17.678241 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:12:17.722245 2235858 cri.go:89] found id: ""
	I0414 14:12:17.722284 2235858 logs.go:282] 0 containers: []
	W0414 14:12:17.722296 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:12:17.722311 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:12:17.722328 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:12:17.772372 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:12:17.772417 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:12:17.786069 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:12:17.786104 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:12:17.857851 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:12:17.857885 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:12:17.857904 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:12:17.935439 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:12:17.935485 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:12:20.478410 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:12:20.498236 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:12:20.498326 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:12:20.549928 2235858 cri.go:89] found id: ""
	I0414 14:12:20.549979 2235858 logs.go:282] 0 containers: []
	W0414 14:12:20.549991 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:12:20.549998 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:12:20.550068 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:12:20.589392 2235858 cri.go:89] found id: ""
	I0414 14:12:20.589427 2235858 logs.go:282] 0 containers: []
	W0414 14:12:20.589439 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:12:20.589447 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:12:20.589521 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:12:20.636134 2235858 cri.go:89] found id: ""
	I0414 14:12:20.636172 2235858 logs.go:282] 0 containers: []
	W0414 14:12:20.636184 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:12:20.636199 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:12:20.636266 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:12:20.687781 2235858 cri.go:89] found id: ""
	I0414 14:12:20.687816 2235858 logs.go:282] 0 containers: []
	W0414 14:12:20.687828 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:12:20.687835 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:12:20.687912 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:12:20.730283 2235858 cri.go:89] found id: ""
	I0414 14:12:20.730317 2235858 logs.go:282] 0 containers: []
	W0414 14:12:20.730328 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:12:20.730336 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:12:20.730406 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:12:20.773864 2235858 cri.go:89] found id: ""
	I0414 14:12:20.773899 2235858 logs.go:282] 0 containers: []
	W0414 14:12:20.773911 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:12:20.773920 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:12:20.773993 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:12:20.834265 2235858 cri.go:89] found id: ""
	I0414 14:12:20.834308 2235858 logs.go:282] 0 containers: []
	W0414 14:12:20.834321 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:12:20.834338 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:12:20.834391 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:12:20.884053 2235858 cri.go:89] found id: ""
	I0414 14:12:20.884086 2235858 logs.go:282] 0 containers: []
	W0414 14:12:20.884105 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:12:20.884119 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:12:20.884135 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:12:20.965484 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:12:20.965537 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:12:20.985304 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:12:20.985349 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:12:21.077177 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:12:21.077203 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:12:21.077219 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:12:21.160644 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:12:21.160684 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:12:23.711813 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:12:23.727727 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:12:23.727813 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:12:23.775446 2235858 cri.go:89] found id: ""
	I0414 14:12:23.775480 2235858 logs.go:282] 0 containers: []
	W0414 14:12:23.775491 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:12:23.775499 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:12:23.775568 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:12:23.823293 2235858 cri.go:89] found id: ""
	I0414 14:12:23.823324 2235858 logs.go:282] 0 containers: []
	W0414 14:12:23.823334 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:12:23.823341 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:12:23.823408 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:12:23.867525 2235858 cri.go:89] found id: ""
	I0414 14:12:23.867558 2235858 logs.go:282] 0 containers: []
	W0414 14:12:23.867568 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:12:23.867576 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:12:23.867638 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:12:23.919651 2235858 cri.go:89] found id: ""
	I0414 14:12:23.919690 2235858 logs.go:282] 0 containers: []
	W0414 14:12:23.919702 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:12:23.919711 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:12:23.919783 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:12:23.969603 2235858 cri.go:89] found id: ""
	I0414 14:12:23.969642 2235858 logs.go:282] 0 containers: []
	W0414 14:12:23.969653 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:12:23.969663 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:12:23.969739 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:12:24.006732 2235858 cri.go:89] found id: ""
	I0414 14:12:24.006762 2235858 logs.go:282] 0 containers: []
	W0414 14:12:24.006772 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:12:24.006780 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:12:24.006848 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:12:24.049286 2235858 cri.go:89] found id: ""
	I0414 14:12:24.049315 2235858 logs.go:282] 0 containers: []
	W0414 14:12:24.049324 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:12:24.049330 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:12:24.049394 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:12:24.094604 2235858 cri.go:89] found id: ""
	I0414 14:12:24.094638 2235858 logs.go:282] 0 containers: []
	W0414 14:12:24.094650 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:12:24.094665 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:12:24.094681 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:12:24.175478 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:12:24.175514 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:12:24.175532 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:12:24.270553 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:12:24.270609 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 14:12:24.336081 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:12:24.336123 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:12:24.397053 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:12:24.397099 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:12:26.916226 2235858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:12:26.933086 2235858 kubeadm.go:597] duration metric: took 4m2.800445933s to restartPrimaryControlPlane
	W0414 14:12:26.933166 2235858 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 14:12:26.933193 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 14:12:28.663012 2235858 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.729792535s)
	I0414 14:12:28.663119 2235858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:12:28.680244 2235858 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:12:28.691772 2235858 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:12:28.705234 2235858 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:12:28.705262 2235858 kubeadm.go:157] found existing configuration files:
	
	I0414 14:12:28.705318 2235858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:12:28.717793 2235858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:12:28.717864 2235858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:12:28.728165 2235858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:12:28.737876 2235858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:12:28.737949 2235858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:12:28.749001 2235858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:12:28.758559 2235858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:12:28.758628 2235858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:12:28.768592 2235858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:12:28.778483 2235858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:12:28.778548 2235858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 14:12:28.788856 2235858 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:12:28.862247 2235858 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 14:12:28.862351 2235858 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:12:29.005480 2235858 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:12:29.005641 2235858 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:12:29.005792 2235858 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 14:12:29.203920 2235858 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:12:29.381147 2235858 out.go:235]   - Generating certificates and keys ...
	I0414 14:12:29.381286 2235858 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:12:29.381376 2235858 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:12:29.381483 2235858 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 14:12:29.381573 2235858 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 14:12:29.381670 2235858 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 14:12:29.381749 2235858 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 14:12:29.381852 2235858 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 14:12:29.381948 2235858 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 14:12:29.382115 2235858 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 14:12:29.382246 2235858 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 14:12:29.382304 2235858 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 14:12:29.382390 2235858 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:12:29.382483 2235858 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:12:29.457480 2235858 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:12:29.714883 2235858 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:12:29.908686 2235858 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:12:29.927903 2235858 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:12:29.929758 2235858 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:12:29.929921 2235858 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:12:30.097033 2235858 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:12:30.222071 2235858 out.go:235]   - Booting up control plane ...
	I0414 14:12:30.222229 2235858 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:12:30.222329 2235858 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:12:30.222406 2235858 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:12:30.222515 2235858 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:12:30.222725 2235858 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 14:13:10.139419 2235858 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 14:13:10.140459 2235858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:13:10.140678 2235858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:13:15.140994 2235858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:13:15.141277 2235858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:13:25.141712 2235858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:13:25.142020 2235858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:13:45.142785 2235858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:13:45.143020 2235858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:14:25.145596 2235858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:14:25.145889 2235858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:14:25.145927 2235858 kubeadm.go:310] 
	I0414 14:14:25.145979 2235858 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 14:14:25.146036 2235858 kubeadm.go:310] 		timed out waiting for the condition
	I0414 14:14:25.146046 2235858 kubeadm.go:310] 
	I0414 14:14:25.146077 2235858 kubeadm.go:310] 	This error is likely caused by:
	I0414 14:14:25.146105 2235858 kubeadm.go:310] 		- The kubelet is not running
	I0414 14:14:25.146192 2235858 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 14:14:25.146199 2235858 kubeadm.go:310] 
	I0414 14:14:25.146279 2235858 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 14:14:25.146307 2235858 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 14:14:25.146333 2235858 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 14:14:25.146341 2235858 kubeadm.go:310] 
	I0414 14:14:25.146436 2235858 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 14:14:25.146511 2235858 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 14:14:25.146515 2235858 kubeadm.go:310] 
	I0414 14:14:25.146627 2235858 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 14:14:25.146733 2235858 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 14:14:25.146821 2235858 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 14:14:25.146908 2235858 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 14:14:25.146917 2235858 kubeadm.go:310] 
	I0414 14:14:25.148249 2235858 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:14:25.148386 2235858 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 14:14:25.148481 2235858 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
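	# The checks kubeadm reports on above can be run by hand; a short sketch using only commands quoted in this log:
	curl -sSL http://localhost:10248/healthz          # the kubelet health endpoint kubeadm polls
	systemctl status kubelet                          # is the kubelet service active?
	journalctl -xeu kubelet                           # why the kubelet failed to come up
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause   # any control-plane containers at all?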
	W0414 14:14:25.148652 2235858 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0414 14:14:25.148707 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 14:14:27.187307 2235858 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.038572901s)
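	# The retry flow visible here, in sketch form (same socket and PATH as in the log lines above):
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	# ...followed by the /etc/kubernetes/*.conf check-and-remove pass below and a second `kubeadm init`.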
	I0414 14:14:27.187384 2235858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:14:27.207446 2235858 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:14:27.218408 2235858 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:14:27.218430 2235858 kubeadm.go:157] found existing configuration files:
	
	I0414 14:14:27.218473 2235858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:14:27.228430 2235858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:14:27.228495 2235858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:14:27.241436 2235858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:14:27.254339 2235858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:14:27.254402 2235858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:14:27.268293 2235858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:14:27.285409 2235858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:14:27.285471 2235858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:14:27.299376 2235858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:14:27.310548 2235858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:14:27.310619 2235858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 14:14:27.321496 2235858 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:14:27.404964 2235858 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 14:14:27.405091 2235858 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:14:27.572510 2235858 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:14:27.572661 2235858 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:14:27.572859 2235858 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 14:14:27.771173 2235858 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:14:27.773050 2235858 out.go:235]   - Generating certificates and keys ...
	I0414 14:14:27.773159 2235858 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:14:27.773240 2235858 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:14:27.773364 2235858 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 14:14:27.773430 2235858 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 14:14:27.773487 2235858 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 14:14:27.773589 2235858 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 14:14:27.774295 2235858 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 14:14:27.774601 2235858 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 14:14:27.775343 2235858 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 14:14:27.775671 2235858 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 14:14:27.775968 2235858 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 14:14:27.776051 2235858 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:14:28.079806 2235858 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:14:28.612779 2235858 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:14:28.685510 2235858 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:14:28.858440 2235858 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:14:28.879441 2235858 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:14:28.881126 2235858 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:14:28.881215 2235858 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:14:29.092061 2235858 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:14:29.093572 2235858 out.go:235]   - Booting up control plane ...
	I0414 14:14:29.093711 2235858 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:14:29.105869 2235858 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:14:29.110749 2235858 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:14:29.113380 2235858 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:14:29.125807 2235858 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 14:15:09.128360 2235858 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 14:15:09.128471 2235858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:15:09.128674 2235858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:15:14.128914 2235858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:15:14.129194 2235858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:15:24.130053 2235858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:15:24.130278 2235858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:15:44.130732 2235858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:15:44.130993 2235858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:16:24.130339 2235858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:16:24.130631 2235858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:16:24.130653 2235858 kubeadm.go:310] 
	I0414 14:16:24.130704 2235858 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 14:16:24.130779 2235858 kubeadm.go:310] 		timed out waiting for the condition
	I0414 14:16:24.130797 2235858 kubeadm.go:310] 
	I0414 14:16:24.130844 2235858 kubeadm.go:310] 	This error is likely caused by:
	I0414 14:16:24.130904 2235858 kubeadm.go:310] 		- The kubelet is not running
	I0414 14:16:24.131056 2235858 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 14:16:24.131075 2235858 kubeadm.go:310] 
	I0414 14:16:24.131212 2235858 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 14:16:24.131254 2235858 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 14:16:24.131293 2235858 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 14:16:24.131299 2235858 kubeadm.go:310] 
	I0414 14:16:24.131421 2235858 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 14:16:24.131520 2235858 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 14:16:24.131528 2235858 kubeadm.go:310] 
	I0414 14:16:24.131660 2235858 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 14:16:24.131767 2235858 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 14:16:24.131853 2235858 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 14:16:24.131938 2235858 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 14:16:24.131946 2235858 kubeadm.go:310] 
	I0414 14:16:24.133108 2235858 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:16:24.133245 2235858 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 14:16:24.133343 2235858 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 14:16:24.133446 2235858 kubeadm.go:394] duration metric: took 8m0.052385423s to StartCluster
	I0414 14:16:24.133512 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:16:24.133587 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:16:24.199915 2235858 cri.go:89] found id: ""
	I0414 14:16:24.199946 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.199956 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:16:24.199965 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:16:24.200032 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:16:24.247368 2235858 cri.go:89] found id: ""
	I0414 14:16:24.247407 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.247418 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:16:24.247427 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:16:24.247496 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:16:24.288565 2235858 cri.go:89] found id: ""
	I0414 14:16:24.288598 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.288610 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:16:24.288618 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:16:24.288687 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:16:24.329531 2235858 cri.go:89] found id: ""
	I0414 14:16:24.329568 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.329581 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:16:24.329591 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:16:24.329663 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:16:24.372326 2235858 cri.go:89] found id: ""
	I0414 14:16:24.372361 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.372370 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:16:24.372376 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:16:24.372447 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:16:24.423414 2235858 cri.go:89] found id: ""
	I0414 14:16:24.423447 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.423460 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:16:24.423469 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:16:24.423534 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:16:24.464828 2235858 cri.go:89] found id: ""
	I0414 14:16:24.464869 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.464882 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:16:24.464890 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:16:24.464970 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:16:24.505791 2235858 cri.go:89] found id: ""
	I0414 14:16:24.505820 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.505830 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
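	# The per-component listing above can be condensed into one loop; the component names are taken from the log:
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"        # empty output means the container was never created
	done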
	I0414 14:16:24.505844 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:16:24.505860 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:16:24.571908 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:16:24.571951 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:16:24.589579 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:16:24.589614 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:16:24.680606 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:16:24.680637 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:16:24.680659 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:16:24.800813 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:16:24.800859 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
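	# The same post-mortem data can be gathered manually inside the VM; every command below is quoted verbatim above:
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a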
	W0414 14:16:24.849704 2235858 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 14:16:24.849777 2235858 out.go:270] * 
	* 
	W0414 14:16:24.849842 2235858 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 14:16:24.849868 2235858 out.go:270] * 
	* 
	W0414 14:16:24.851036 2235858 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
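	# As the box above suggests, a full log bundle for a bug report can be captured with
	# (add -p <profile> when using a non-default profile, as this test suite does):
	minikube logs --file=logs.txt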
	I0414 14:16:24.854829 2235858 out.go:201] 
	W0414 14:16:24.856198 2235858 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 14:16:24.856246 2235858 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 14:16:24.856269 2235858 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 14:16:24.857740 2235858 out.go:201] 

                                                
                                                
** /stderr **
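The kubeadm output above already names the first diagnostics to run on the node when the kubelet never answers on :10248. With cri-o as the runtime, as in this job, that sequence (a sketch assembled from the commands quoted in the log, not re-run here; CONTAINERID is the placeholder from the kubeadm message) is:

	systemctl status kubelet
	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID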
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-954411 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
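The failing start also prints its own remediation: retry with the kubelet cgroup driver pinned to systemd. A follow-up invocation along those lines, reusing the profile and key flags from the command above (a suggestion taken from the log output, not executed as part of this run), would be:

	out/minikube-linux-amd64 start -p old-k8s-version-954411 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd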
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-954411 -n old-k8s-version-954411
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-954411 -n old-k8s-version-954411: exit status 2 (259.34771ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-954411 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-793608 sudo iptables                       | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo                                | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo                                | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo                                | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo cat                            | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo cat                            | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo                                | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo                                | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo cat                            | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo docker                         | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo                                | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo                                | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo cat                            | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo cat                            | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo                                | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo                                | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo                                | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo cat                            | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo cat                            | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo                                | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo                                | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo                                | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo find                           | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-793608 sudo crio                           | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-793608                                     | bridge-793608 | jenkins | v1.35.0 | 14 Apr 25 14:15 UTC | 14 Apr 25 14:15 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:15:31
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:15:31.712686 2246921 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:15:31.712831 2246921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:15:31.712841 2246921 out.go:358] Setting ErrFile to fd 2...
	I0414 14:15:31.712845 2246921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:15:31.713023 2246921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 14:15:31.713616 2246921 out.go:352] Setting JSON to false
	I0414 14:15:31.714831 2246921 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":169071,"bootTime":1744471061,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:15:31.714947 2246921 start.go:139] virtualization: kvm guest
	I0414 14:15:31.717011 2246921 out.go:177] * [enable-default-cni-793608] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:15:31.718463 2246921 out.go:177]   - MINIKUBE_LOCATION=20623
	I0414 14:15:31.718471 2246921 notify.go:220] Checking for updates...
	I0414 14:15:31.720654 2246921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:15:31.721764 2246921 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:15:31.722980 2246921 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:15:31.724178 2246921 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:15:31.725315 2246921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:15:31.727113 2246921 config.go:182] Loaded profile config "bridge-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:15:31.727265 2246921 config.go:182] Loaded profile config "flannel-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:15:31.727430 2246921 config.go:182] Loaded profile config "old-k8s-version-954411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 14:15:31.727563 2246921 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:15:31.767165 2246921 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:15:31.768293 2246921 start.go:297] selected driver: kvm2
	I0414 14:15:31.768305 2246921 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:15:31.768317 2246921 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:15:31.769036 2246921 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:15:31.769109 2246921 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20623-2183077/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:15:31.784672 2246921 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:15:31.784720 2246921 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0414 14:15:31.784990 2246921 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0414 14:15:31.785021 2246921 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:15:31.785052 2246921 cni.go:84] Creating CNI manager for "bridge"
	I0414 14:15:31.785058 2246921 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 14:15:31.785117 2246921 start.go:340] cluster config:
	{Name:enable-default-cni-793608 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:enable-default-cni-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:15:31.785199 2246921 iso.go:125] acquiring lock: {Name:mk1b6bc811d798b73231639961523f4c8d001a9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:15:31.786961 2246921 out.go:177] * Starting "enable-default-cni-793608" primary control-plane node in "enable-default-cni-793608" cluster
	I0414 14:15:29.994679 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:29.995209 2245195 main.go:141] libmachine: (flannel-793608) DBG | unable to find current IP address of domain flannel-793608 in network mk-flannel-793608
	I0414 14:15:29.995234 2245195 main.go:141] libmachine: (flannel-793608) DBG | I0414 14:15:29.995171 2245218 retry.go:31] will retry after 4.26066759s: waiting for domain to come up
	I0414 14:15:34.260693 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.261277 2245195 main.go:141] libmachine: (flannel-793608) found domain IP: 192.168.72.179
	I0414 14:15:34.261303 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has current primary IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.261308 2245195 main.go:141] libmachine: (flannel-793608) reserving static IP address...
	I0414 14:15:34.261716 2245195 main.go:141] libmachine: (flannel-793608) DBG | unable to find host DHCP lease matching {name: "flannel-793608", mac: "52:54:00:62:9d:72", ip: "192.168.72.179"} in network mk-flannel-793608
	I0414 14:15:34.346350 2245195 main.go:141] libmachine: (flannel-793608) reserved static IP address 192.168.72.179 for domain flannel-793608
	I0414 14:15:34.346390 2245195 main.go:141] libmachine: (flannel-793608) waiting for SSH...
	I0414 14:15:34.346401 2245195 main.go:141] libmachine: (flannel-793608) DBG | Getting to WaitForSSH function...
	I0414 14:15:34.349135 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.349868 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.349899 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.350052 2245195 main.go:141] libmachine: (flannel-793608) DBG | Using SSH client type: external
	I0414 14:15:34.350078 2245195 main.go:141] libmachine: (flannel-793608) DBG | Using SSH private key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa (-rw-------)
	I0414 14:15:34.350116 2245195 main.go:141] libmachine: (flannel-793608) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:15:34.350131 2245195 main.go:141] libmachine: (flannel-793608) DBG | About to run SSH command:
	I0414 14:15:34.350150 2245195 main.go:141] libmachine: (flannel-793608) DBG | exit 0
	I0414 14:15:34.484913 2245195 main.go:141] libmachine: (flannel-793608) DBG | SSH cmd err, output: <nil>: 
	I0414 14:15:34.485227 2245195 main.go:141] libmachine: (flannel-793608) KVM machine creation complete
	I0414 14:15:34.485544 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetConfigRaw
	I0414 14:15:34.486221 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:34.486401 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:34.486553 2245195 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:15:34.486568 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetState
	I0414 14:15:34.487978 2245195 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:15:34.487993 2245195 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:15:34.488000 2245195 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:15:34.488008 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:34.490564 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.490891 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.490935 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.491086 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:34.491262 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.491420 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.491570 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:34.491735 2245195 main.go:141] libmachine: Using SSH client type: native
	I0414 14:15:34.491982 2245195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0414 14:15:34.491998 2245195 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:15:34.604245 2245195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:15:34.604277 2245195 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:15:34.604289 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:34.606969 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.607364 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.607394 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.607479 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:34.607712 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.607871 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.608010 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:34.608176 2245195 main.go:141] libmachine: Using SSH client type: native
	I0414 14:15:34.608423 2245195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0414 14:15:34.608435 2245195 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:15:31.788237 2246921 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:15:31.788265 2246921 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 14:15:31.788272 2246921 cache.go:56] Caching tarball of preloaded images
	I0414 14:15:31.788346 2246921 preload.go:172] Found /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 14:15:31.788355 2246921 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 14:15:31.788446 2246921 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/config.json ...
	I0414 14:15:31.788463 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/config.json: {Name:mkf77fb616cb68a05b6b927a1d1b666f496a2e2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:31.788580 2246921 start.go:360] acquireMachinesLock for enable-default-cni-793608: {Name:mka8bf7d0904b7ab9a32ecac2c5513c5d5418afd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:15:35.885793 2246921 start.go:364] duration metric: took 4.097174218s to acquireMachinesLock for "enable-default-cni-793608"
	I0414 14:15:35.885866 2246921 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-793608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:enable-default-cni-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:15:35.886064 2246921 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:15:35.888060 2246921 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0414 14:15:35.888295 2246921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:15:35.888367 2246921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:15:35.906793 2246921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
	I0414 14:15:35.907218 2246921 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:15:35.907761 2246921 main.go:141] libmachine: Using API Version  1
	I0414 14:15:35.907787 2246921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:15:35.908162 2246921 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:15:35.908377 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetMachineName
	I0414 14:15:35.908506 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:15:35.908667 2246921 start.go:159] libmachine.API.Create for "enable-default-cni-793608" (driver="kvm2")
	I0414 14:15:35.908702 2246921 client.go:168] LocalClient.Create starting
	I0414 14:15:35.908763 2246921 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem
	I0414 14:15:35.908804 2246921 main.go:141] libmachine: Decoding PEM data...
	I0414 14:15:35.908828 2246921 main.go:141] libmachine: Parsing certificate...
	I0414 14:15:35.908911 2246921 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem
	I0414 14:15:35.908946 2246921 main.go:141] libmachine: Decoding PEM data...
	I0414 14:15:35.908967 2246921 main.go:141] libmachine: Parsing certificate...
	I0414 14:15:35.909001 2246921 main.go:141] libmachine: Running pre-create checks...
	I0414 14:15:35.909014 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .PreCreateCheck
	I0414 14:15:35.909444 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetConfigRaw
	I0414 14:15:35.909876 2246921 main.go:141] libmachine: Creating machine...
	I0414 14:15:35.909891 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .Create
	I0414 14:15:35.910047 2246921 main.go:141] libmachine: (enable-default-cni-793608) creating KVM machine...
	I0414 14:15:35.910070 2246921 main.go:141] libmachine: (enable-default-cni-793608) creating network...
	I0414 14:15:35.911361 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found existing default KVM network
	I0414 14:15:35.912285 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:35.912133 2246989 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:06:97:78} reservation:<nil>}
	I0414 14:15:35.913042 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:35.912966 2246989 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:af:99:3f} reservation:<nil>}
	I0414 14:15:35.914019 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:35.913920 2246989 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000292a90}
	I0414 14:15:35.914054 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | created network xml: 
	I0414 14:15:35.914078 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | <network>
	I0414 14:15:35.914088 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |   <name>mk-enable-default-cni-793608</name>
	I0414 14:15:35.914099 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |   <dns enable='no'/>
	I0414 14:15:35.914108 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |   
	I0414 14:15:35.914122 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0414 14:15:35.914132 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |     <dhcp>
	I0414 14:15:35.914142 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0414 14:15:35.914154 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |     </dhcp>
	I0414 14:15:35.914161 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |   </ip>
	I0414 14:15:35.914173 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |   
	I0414 14:15:35.914184 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | </network>
	I0414 14:15:35.914202 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | 
	I0414 14:15:35.919363 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | trying to create private KVM network mk-enable-default-cni-793608 192.168.61.0/24...
	I0414 14:15:36.004404 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | private KVM network mk-enable-default-cni-793608 192.168.61.0/24 created
	I0414 14:15:36.004444 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting up store path in /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608 ...
	I0414 14:15:36.004472 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:36.004346 2246989 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:15:36.004493 2246921 main.go:141] libmachine: (enable-default-cni-793608) building disk image from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:15:36.004516 2246921 main.go:141] libmachine: (enable-default-cni-793608) Downloading /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:15:36.310781 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:36.310631 2246989 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa...
	I0414 14:15:36.425010 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:36.424863 2246989 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/enable-default-cni-793608.rawdisk...
	I0414 14:15:36.425050 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Writing magic tar header
	I0414 14:15:36.425064 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Writing SSH key tar header
	I0414 14:15:36.425072 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:36.425023 2246989 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608 ...
	I0414 14:15:36.425218 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608
	I0414 14:15:36.425260 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608 (perms=drwx------)
	I0414 14:15:36.425270 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines
	I0414 14:15:36.425291 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:15:36.425304 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077
	I0414 14:15:36.425315 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:15:36.425326 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home/jenkins
	I0414 14:15:36.425339 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:15:36.425360 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube (perms=drwxr-xr-x)
	I0414 14:15:36.425371 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077 (perms=drwxrwxr-x)
	I0414 14:15:36.425379 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home
	I0414 14:15:36.425391 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | skipping /home - not owner
	I0414 14:15:36.425401 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:15:36.425420 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:15:36.425433 2246921 main.go:141] libmachine: (enable-default-cni-793608) creating domain...
	I0414 14:15:36.426763 2246921 main.go:141] libmachine: (enable-default-cni-793608) define libvirt domain using xml: 
	I0414 14:15:36.426788 2246921 main.go:141] libmachine: (enable-default-cni-793608) <domain type='kvm'>
	I0414 14:15:36.426799 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <name>enable-default-cni-793608</name>
	I0414 14:15:36.426807 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <memory unit='MiB'>3072</memory>
	I0414 14:15:36.426816 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <vcpu>2</vcpu>
	I0414 14:15:36.426832 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <features>
	I0414 14:15:36.426844 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <acpi/>
	I0414 14:15:36.426858 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <apic/>
	I0414 14:15:36.426869 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <pae/>
	I0414 14:15:36.426877 2246921 main.go:141] libmachine: (enable-default-cni-793608)     
	I0414 14:15:36.426882 2246921 main.go:141] libmachine: (enable-default-cni-793608)   </features>
	I0414 14:15:36.426902 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <cpu mode='host-passthrough'>
	I0414 14:15:36.426909 2246921 main.go:141] libmachine: (enable-default-cni-793608)   
	I0414 14:15:36.426914 2246921 main.go:141] libmachine: (enable-default-cni-793608)   </cpu>
	I0414 14:15:36.426921 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <os>
	I0414 14:15:36.426925 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <type>hvm</type>
	I0414 14:15:36.426963 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <boot dev='cdrom'/>
	I0414 14:15:36.427000 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <boot dev='hd'/>
	I0414 14:15:36.427014 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <bootmenu enable='no'/>
	I0414 14:15:36.427038 2246921 main.go:141] libmachine: (enable-default-cni-793608)   </os>
	I0414 14:15:36.427051 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <devices>
	I0414 14:15:36.427067 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <disk type='file' device='cdrom'>
	I0414 14:15:36.427085 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/boot2docker.iso'/>
	I0414 14:15:36.427097 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <target dev='hdc' bus='scsi'/>
	I0414 14:15:36.427109 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <readonly/>
	I0414 14:15:36.427119 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </disk>
	I0414 14:15:36.427129 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <disk type='file' device='disk'>
	I0414 14:15:36.427147 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:15:36.427170 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/enable-default-cni-793608.rawdisk'/>
	I0414 14:15:36.427181 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <target dev='hda' bus='virtio'/>
	I0414 14:15:36.427194 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </disk>
	I0414 14:15:36.427205 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <interface type='network'>
	I0414 14:15:36.427218 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <source network='mk-enable-default-cni-793608'/>
	I0414 14:15:36.427231 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <model type='virtio'/>
	I0414 14:15:36.427247 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </interface>
	I0414 14:15:36.427259 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <interface type='network'>
	I0414 14:15:36.427267 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <source network='default'/>
	I0414 14:15:36.427279 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <model type='virtio'/>
	I0414 14:15:36.427299 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </interface>
	I0414 14:15:36.427311 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <serial type='pty'>
	I0414 14:15:36.427326 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <target port='0'/>
	I0414 14:15:36.427338 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </serial>
	I0414 14:15:36.427349 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <console type='pty'>
	I0414 14:15:36.427370 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <target type='serial' port='0'/>
	I0414 14:15:36.427397 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </console>
	I0414 14:15:36.427410 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <rng model='virtio'>
	I0414 14:15:36.427421 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <backend model='random'>/dev/random</backend>
	I0414 14:15:36.427432 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </rng>
	I0414 14:15:36.427442 2246921 main.go:141] libmachine: (enable-default-cni-793608)     
	I0414 14:15:36.427451 2246921 main.go:141] libmachine: (enable-default-cni-793608)     
	I0414 14:15:36.427464 2246921 main.go:141] libmachine: (enable-default-cni-793608)   </devices>
	I0414 14:15:36.427475 2246921 main.go:141] libmachine: (enable-default-cni-793608) </domain>
	I0414 14:15:36.427488 2246921 main.go:141] libmachine: (enable-default-cni-793608) 
	I0414 14:15:36.431881 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:85:82:bc in network default
	I0414 14:15:36.432649 2246921 main.go:141] libmachine: (enable-default-cni-793608) starting domain...
	I0414 14:15:36.432690 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:36.432705 2246921 main.go:141] libmachine: (enable-default-cni-793608) ensuring networks are active...
	I0414 14:15:36.433501 2246921 main.go:141] libmachine: (enable-default-cni-793608) Ensuring network default is active
	I0414 14:15:36.433815 2246921 main.go:141] libmachine: (enable-default-cni-793608) Ensuring network mk-enable-default-cni-793608 is active
	I0414 14:15:36.434345 2246921 main.go:141] libmachine: (enable-default-cni-793608) getting domain XML...
	I0414 14:15:36.435023 2246921 main.go:141] libmachine: (enable-default-cni-793608) creating domain...
	I0414 14:15:34.721833 2245195 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:15:34.721950 2245195 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:15:34.721968 2245195 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:15:34.721980 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetMachineName
	I0414 14:15:34.722264 2245195 buildroot.go:166] provisioning hostname "flannel-793608"
	I0414 14:15:34.722299 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetMachineName
	I0414 14:15:34.722517 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:34.725190 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.725590 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.725618 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.725786 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:34.725976 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.726158 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.726304 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:34.726456 2245195 main.go:141] libmachine: Using SSH client type: native
	I0414 14:15:34.726666 2245195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0414 14:15:34.726685 2245195 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-793608 && echo "flannel-793608" | sudo tee /etc/hostname
	I0414 14:15:34.856671 2245195 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-793608
	
	I0414 14:15:34.856706 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:34.859492 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.859878 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.859918 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.860081 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:34.860306 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.860473 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.860626 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:34.860812 2245195 main.go:141] libmachine: Using SSH client type: native
	I0414 14:15:34.861092 2245195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0414 14:15:34.861118 2245195 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-793608' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-793608/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-793608' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:15:34.981989 2245195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:15:34.982020 2245195 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20623-2183077/.minikube CaCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20623-2183077/.minikube}
	I0414 14:15:34.982064 2245195 buildroot.go:174] setting up certificates
	I0414 14:15:34.982083 2245195 provision.go:84] configureAuth start
	I0414 14:15:34.982100 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetMachineName
	I0414 14:15:34.982387 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetIP
	I0414 14:15:34.985287 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.985634 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.985664 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.985812 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:34.987950 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.988286 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.988317 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.988471 2245195 provision.go:143] copyHostCerts
	I0414 14:15:34.988524 2245195 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem, removing ...
	I0414 14:15:34.988534 2245195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem
	I0414 14:15:34.988599 2245195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem (1078 bytes)
	I0414 14:15:34.988693 2245195 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem, removing ...
	I0414 14:15:34.988701 2245195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem
	I0414 14:15:34.988724 2245195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem (1123 bytes)
	I0414 14:15:34.988819 2245195 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem, removing ...
	I0414 14:15:34.988834 2245195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem
	I0414 14:15:34.988863 2245195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem (1675 bytes)
	I0414 14:15:34.988910 2245195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem org=jenkins.flannel-793608 san=[127.0.0.1 192.168.72.179 flannel-793608 localhost minikube]
	I0414 14:15:35.242680 2245195 provision.go:177] copyRemoteCerts
	I0414 14:15:35.242795 2245195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:15:35.242845 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:35.246504 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.246882 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.246915 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.247123 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:35.247346 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.247546 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:35.247691 2245195 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa Username:docker}
	I0414 14:15:35.335122 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 14:15:35.359746 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0414 14:15:35.383458 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:15:35.406827 2245195 provision.go:87] duration metric: took 424.726599ms to configureAuth
	I0414 14:15:35.406858 2245195 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:15:35.407035 2245195 config.go:182] Loaded profile config "flannel-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:15:35.407113 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:35.409975 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.410322 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.410352 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.410487 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:35.410685 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.410854 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.410996 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:35.411145 2245195 main.go:141] libmachine: Using SSH client type: native
	I0414 14:15:35.411363 2245195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0414 14:15:35.411378 2245195 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 14:15:35.634723 2245195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 14:15:35.634754 2245195 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:15:35.634762 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetURL
	I0414 14:15:35.636108 2245195 main.go:141] libmachine: (flannel-793608) DBG | using libvirt version 6000000
	I0414 14:15:35.638402 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.638738 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.638770 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.638978 2245195 main.go:141] libmachine: Docker is up and running!
	I0414 14:15:35.638990 2245195 main.go:141] libmachine: Reticulating splines...
	I0414 14:15:35.638999 2245195 client.go:171] duration metric: took 25.896323518s to LocalClient.Create
	I0414 14:15:35.639031 2245195 start.go:167] duration metric: took 25.896405712s to libmachine.API.Create "flannel-793608"
	I0414 14:15:35.639044 2245195 start.go:293] postStartSetup for "flannel-793608" (driver="kvm2")
	I0414 14:15:35.639058 2245195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:15:35.639082 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:35.639326 2245195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:15:35.639354 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:35.641386 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.641767 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.641796 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.641940 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:35.642082 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.642270 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:35.642382 2245195 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa Username:docker}
	I0414 14:15:35.727765 2245195 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:15:35.732019 2245195 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:15:35.732052 2245195 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/addons for local assets ...
	I0414 14:15:35.732122 2245195 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/files for local assets ...
	I0414 14:15:35.732246 2245195 filesync.go:149] local asset: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem -> 21904002.pem in /etc/ssl/certs
	I0414 14:15:35.732379 2245195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:15:35.742061 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:15:35.766562 2245195 start.go:296] duration metric: took 127.496422ms for postStartSetup
	I0414 14:15:35.766624 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetConfigRaw
	I0414 14:15:35.767287 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetIP
	I0414 14:15:35.770180 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.770527 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.770556 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.770795 2245195 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/config.json ...
	I0414 14:15:35.771009 2245195 start.go:128] duration metric: took 26.050328808s to createHost
	I0414 14:15:35.771033 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:35.773350 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.773680 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.773709 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.773847 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:35.774059 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.774197 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.774332 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:35.774490 2245195 main.go:141] libmachine: Using SSH client type: native
	I0414 14:15:35.774772 2245195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0414 14:15:35.774784 2245195 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:15:35.885598 2245195 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640135.860958804
	
	I0414 14:15:35.885628 2245195 fix.go:216] guest clock: 1744640135.860958804
	I0414 14:15:35.885639 2245195 fix.go:229] Guest: 2025-04-14 14:15:35.860958804 +0000 UTC Remote: 2025-04-14 14:15:35.771023131 +0000 UTC m=+26.173579221 (delta=89.935673ms)
	I0414 14:15:35.885673 2245195 fix.go:200] guest clock delta is within tolerance: 89.935673ms
	I0414 14:15:35.885683 2245195 start.go:83] releasing machines lock for "flannel-793608", held for 26.165125753s
	I0414 14:15:35.885713 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:35.886039 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetIP
	I0414 14:15:35.889061 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.889425 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.889466 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.889637 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:35.890211 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:35.890425 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:35.890536 2245195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:15:35.890579 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:35.890691 2245195 ssh_runner.go:195] Run: cat /version.json
	I0414 14:15:35.890721 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:35.893586 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.893869 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.893934 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.893981 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.894247 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:35.894384 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.894411 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.894457 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.894574 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:35.894629 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:35.894739 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.894808 2245195 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa Username:docker}
	I0414 14:15:35.894924 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:35.895057 2245195 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa Username:docker}
	I0414 14:15:35.982011 2245195 ssh_runner.go:195] Run: systemctl --version
	I0414 14:15:36.008338 2245195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 14:15:36.168391 2245195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:15:36.174476 2245195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:15:36.174551 2245195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:15:36.191051 2245195 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:15:36.191080 2245195 start.go:495] detecting cgroup driver to use...
	I0414 14:15:36.191168 2245195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 14:15:36.209096 2245195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 14:15:36.223881 2245195 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:15:36.223954 2245195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:15:36.239607 2245195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:15:36.254647 2245195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:15:36.382628 2245195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:15:36.567479 2245195 docker.go:233] disabling docker service ...
	I0414 14:15:36.567573 2245195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:15:36.583824 2245195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:15:36.597712 2245195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:15:36.773681 2245195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:15:36.916917 2245195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:15:36.935946 2245195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:15:36.958970 2245195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 14:15:36.959024 2245195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:36.972811 2245195 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 14:15:36.972871 2245195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:36.988108 2245195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:37.003343 2245195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:37.018161 2245195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:15:37.030406 2245195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:37.043236 2245195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:37.064170 2245195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:37.080502 2245195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:15:37.094496 2245195 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:15:37.094554 2245195 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:15:37.109299 2245195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:15:37.120177 2245195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:15:37.270593 2245195 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 14:15:37.363308 2245195 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 14:15:37.363395 2245195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 14:15:37.368889 2245195 start.go:563] Will wait 60s for crictl version
	I0414 14:15:37.368989 2245195 ssh_runner.go:195] Run: which crictl
	I0414 14:15:37.373260 2245195 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:15:37.419353 2245195 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 14:15:37.419459 2245195 ssh_runner.go:195] Run: crio --version
	I0414 14:15:37.452713 2245195 ssh_runner.go:195] Run: crio --version
	I0414 14:15:37.488597 2245195 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 14:15:37.489796 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetIP
	I0414 14:15:37.493160 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:37.493715 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:37.493740 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:37.494018 2245195 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0414 14:15:37.499012 2245195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:15:37.512911 2245195 kubeadm.go:883] updating cluster {Name:flannel-793608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-793608
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.179 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:15:37.513053 2245195 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:15:37.513119 2245195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:15:37.548903 2245195 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:15:37.548981 2245195 ssh_runner.go:195] Run: which lz4
	I0414 14:15:37.553268 2245195 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:15:37.557856 2245195 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:15:37.557890 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 14:15:39.123096 2245195 crio.go:462] duration metric: took 1.569856354s to copy over tarball
	I0414 14:15:39.123200 2245195 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:15:37.970496 2246921 main.go:141] libmachine: (enable-default-cni-793608) waiting for IP...
	I0414 14:15:37.971657 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:37.972252 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:37.972347 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:37.972267 2246989 retry.go:31] will retry after 263.370551ms: waiting for domain to come up
	I0414 14:15:38.238079 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:38.238915 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:38.238941 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:38.238830 2246989 retry.go:31] will retry after 385.607481ms: waiting for domain to come up
	I0414 14:15:38.626321 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:38.627021 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:38.627050 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:38.626998 2246989 retry.go:31] will retry after 445.201612ms: waiting for domain to come up
	I0414 14:15:39.073922 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:39.074637 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:39.074669 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:39.074614 2246989 retry.go:31] will retry after 401.280526ms: waiting for domain to come up
	I0414 14:15:39.477622 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:39.478402 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:39.478431 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:39.478359 2246989 retry.go:31] will retry after 525.224065ms: waiting for domain to come up
	I0414 14:15:40.005081 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:40.005652 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:40.005679 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:40.005612 2246989 retry.go:31] will retry after 886.00622ms: waiting for domain to come up
	I0414 14:15:40.893950 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:40.894495 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:40.894532 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:40.894465 2246989 retry.go:31] will retry after 854.182582ms: waiting for domain to come up
	I0414 14:15:41.493709 2245195 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.370463717s)
	I0414 14:15:41.493748 2245195 crio.go:469] duration metric: took 2.370608674s to extract the tarball
	I0414 14:15:41.493759 2245195 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 14:15:41.535292 2245195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:15:41.588898 2245195 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 14:15:41.588940 2245195 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:15:41.588949 2245195 kubeadm.go:934] updating node { 192.168.72.179 8443 v1.32.2 crio true true} ...
	I0414 14:15:41.589074 2245195 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-793608 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:flannel-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0414 14:15:41.589140 2245195 ssh_runner.go:195] Run: crio config
	I0414 14:15:41.654490 2245195 cni.go:84] Creating CNI manager for "flannel"
	I0414 14:15:41.654526 2245195 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:15:41.654559 2245195 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.179 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-793608 NodeName:flannel-793608 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:15:41.654767 2245195 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-793608"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.179"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.179"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 14:15:41.654853 2245195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:15:41.665504 2245195 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:15:41.665589 2245195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 14:15:41.675974 2245195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0414 14:15:41.694468 2245195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:15:41.712581 2245195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0414 14:15:41.731194 2245195 ssh_runner.go:195] Run: grep 192.168.72.179	control-plane.minikube.internal$ /etc/hosts
	I0414 14:15:41.735372 2245195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:15:41.748968 2245195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:15:41.865867 2245195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:15:41.886997 2245195 certs.go:68] Setting up /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608 for IP: 192.168.72.179
	I0414 14:15:41.887023 2245195 certs.go:194] generating shared ca certs ...
	I0414 14:15:41.887041 2245195 certs.go:226] acquiring lock for ca certs: {Name:mkd994da28098ae08a84efba20f096b52fe71222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:41.887257 2245195 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key
	I0414 14:15:41.887344 2245195 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key
	I0414 14:15:41.887359 2245195 certs.go:256] generating profile certs ...
	I0414 14:15:41.887451 2245195 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.key
	I0414 14:15:41.887472 2245195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt with IP's: []
	I0414 14:15:42.047090 2245195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt ...
	I0414 14:15:42.047130 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: {Name:mk61725d6c2d598935bcc4ddc3016fd5f2c41ddf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:42.047361 2245195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.key ...
	I0414 14:15:42.047378 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.key: {Name:mk34fa3cf8ab863f5f74888d1351e7b4a1a82440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:42.047497 2245195 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.key.d08a0052
	I0414 14:15:42.047517 2245195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.crt.d08a0052 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.179]
	I0414 14:15:42.148599 2245195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.crt.d08a0052 ...
	I0414 14:15:42.148638 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.crt.d08a0052: {Name:mk1db924027905394f8766631f4c71ead06a8ced Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:42.148885 2245195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.key.d08a0052 ...
	I0414 14:15:42.148907 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.key.d08a0052: {Name:mkbaf3ac23585ef0764dcb14eee50a6ebe5b28d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:42.149024 2245195 certs.go:381] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.crt.d08a0052 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.crt
	I0414 14:15:42.149140 2245195 certs.go:385] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.key.d08a0052 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.key
	I0414 14:15:42.149237 2245195 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.key
	I0414 14:15:42.149261 2245195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.crt with IP's: []
	I0414 14:15:42.494187 2245195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.crt ...
	I0414 14:15:42.494227 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.crt: {Name:mk0e79a8197af3196f139854e3ee11b8a9027e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:42.494439 2245195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.key ...
	I0414 14:15:42.494459 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.key: {Name:mk9593886f9fd4b010d5b9a09f833fed6848aae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:42.494757 2245195 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem (1338 bytes)
	W0414 14:15:42.494818 2245195 certs.go:480] ignoring /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400_empty.pem, impossibly tiny 0 bytes
	I0414 14:15:42.494832 2245195 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 14:15:42.494857 2245195 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem (1078 bytes)
	I0414 14:15:42.494883 2245195 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:15:42.494912 2245195 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem (1675 bytes)
	I0414 14:15:42.494953 2245195 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:15:42.495564 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:15:42.528844 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:15:42.560780 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:15:42.606054 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:15:42.646740 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 14:15:42.680301 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:15:42.711568 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:15:42.740840 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 14:15:42.771555 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /usr/share/ca-certificates/21904002.pem (1708 bytes)
	I0414 14:15:42.807236 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:15:42.835699 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem --> /usr/share/ca-certificates/2190400.pem (1338 bytes)
	I0414 14:15:42.863445 2245195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:15:42.883659 2245195 ssh_runner.go:195] Run: openssl version
	I0414 14:15:42.890583 2245195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21904002.pem && ln -fs /usr/share/ca-certificates/21904002.pem /etc/ssl/certs/21904002.pem"
	I0414 14:15:42.901664 2245195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21904002.pem
	I0414 14:15:42.906367 2245195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 13:02 /usr/share/ca-certificates/21904002.pem
	I0414 14:15:42.906428 2245195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21904002.pem
	I0414 14:15:42.912610 2245195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21904002.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:15:42.923894 2245195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:15:42.935385 2245195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:15:42.940238 2245195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:15:42.940307 2245195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:15:42.946322 2245195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:15:42.960753 2245195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2190400.pem && ln -fs /usr/share/ca-certificates/2190400.pem /etc/ssl/certs/2190400.pem"
	I0414 14:15:42.973724 2245195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2190400.pem
	I0414 14:15:42.979243 2245195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 13:02 /usr/share/ca-certificates/2190400.pem
	I0414 14:15:42.979299 2245195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2190400.pem
	I0414 14:15:42.985427 2245195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2190400.pem /etc/ssl/certs/51391683.0"
	I0414 14:15:42.996662 2245195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:15:43.001220 2245195 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:15:43.001300 2245195 kubeadm.go:392] StartCluster: {Name:flannel-793608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-793608 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.179 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:15:43.001402 2245195 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 14:15:43.001459 2245195 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:15:43.038601 2245195 cri.go:89] found id: ""
	I0414 14:15:43.038700 2245195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:15:43.049342 2245195 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:15:43.059616 2245195 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:15:43.070826 2245195 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:15:43.070850 2245195 kubeadm.go:157] found existing configuration files:
	
	I0414 14:15:43.070910 2245195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:15:43.081463 2245195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:15:43.081530 2245195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:15:43.091483 2245195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:15:43.103049 2245195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:15:43.103137 2245195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:15:43.113237 2245195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:15:43.124160 2245195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:15:43.124230 2245195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:15:43.138965 2245195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:15:43.153232 2245195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:15:43.153306 2245195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 14:15:43.167864 2245195 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:15:43.400744 2245195 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:15:41.750230 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:41.750751 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:41.750807 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:41.750744 2246989 retry.go:31] will retry after 1.224694163s: waiting for domain to come up
	I0414 14:15:42.976809 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:42.977336 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:42.977384 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:42.977328 2246989 retry.go:31] will retry after 1.264920996s: waiting for domain to come up
	I0414 14:15:44.243549 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:44.244159 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:44.244193 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:44.244066 2246989 retry.go:31] will retry after 1.517311486s: waiting for domain to come up
	I0414 14:15:45.763600 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:45.764116 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:45.764135 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:45.764091 2246989 retry.go:31] will retry after 1.746471018s: waiting for domain to come up
	I0414 14:15:44.130732 2235858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:15:44.130993 2235858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:15:47.511868 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:47.512619 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:47.512650 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:47.512522 2246989 retry.go:31] will retry after 3.501788139s: waiting for domain to come up
	I0414 14:15:51.016231 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:51.016805 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:51.016837 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:51.016759 2246989 retry.go:31] will retry after 3.940965891s: waiting for domain to come up
	I0414 14:15:54.321686 2245195 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:15:54.321774 2245195 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:15:54.321884 2245195 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:15:54.322091 2245195 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:15:54.322219 2245195 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:15:54.322316 2245195 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:15:54.323900 2245195 out.go:235]   - Generating certificates and keys ...
	I0414 14:15:54.323989 2245195 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:15:54.324068 2245195 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:15:54.324163 2245195 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:15:54.324244 2245195 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:15:54.324357 2245195 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:15:54.324444 2245195 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:15:54.324558 2245195 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:15:54.324765 2245195 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-793608 localhost] and IPs [192.168.72.179 127.0.0.1 ::1]
	I0414 14:15:54.324837 2245195 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:15:54.325003 2245195 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-793608 localhost] and IPs [192.168.72.179 127.0.0.1 ::1]
	I0414 14:15:54.325062 2245195 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:15:54.325116 2245195 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:15:54.325157 2245195 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:15:54.325240 2245195 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:15:54.325297 2245195 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:15:54.325361 2245195 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:15:54.325410 2245195 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:15:54.325469 2245195 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:15:54.325533 2245195 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:15:54.325622 2245195 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:15:54.325680 2245195 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:15:54.326976 2245195 out.go:235]   - Booting up control plane ...
	I0414 14:15:54.327061 2245195 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:15:54.327129 2245195 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:15:54.327223 2245195 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:15:54.327393 2245195 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:15:54.327473 2245195 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:15:54.327543 2245195 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:15:54.327735 2245195 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:15:54.327895 2245195 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:15:54.327988 2245195 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.78321ms
	I0414 14:15:54.328108 2245195 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:15:54.328217 2245195 kubeadm.go:310] [api-check] The API server is healthy after 5.502171207s
	I0414 14:15:54.328371 2245195 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:15:54.328532 2245195 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:15:54.328601 2245195 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:15:54.328798 2245195 kubeadm.go:310] [mark-control-plane] Marking the node flannel-793608 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:15:54.328865 2245195 kubeadm.go:310] [bootstrap-token] Using token: zu89f8.zeaf2f1xfahm8xki
	I0414 14:15:54.330659 2245195 out.go:235]   - Configuring RBAC rules ...
	I0414 14:15:54.330777 2245195 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:15:54.330853 2245195 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:15:54.330999 2245195 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:15:54.331151 2245195 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:15:54.331343 2245195 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:15:54.331475 2245195 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:15:54.331629 2245195 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:15:54.331710 2245195 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:15:54.331776 2245195 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:15:54.331786 2245195 kubeadm.go:310] 
	I0414 14:15:54.331859 2245195 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:15:54.331868 2245195 kubeadm.go:310] 
	I0414 14:15:54.331988 2245195 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:15:54.331996 2245195 kubeadm.go:310] 
	I0414 14:15:54.332023 2245195 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:15:54.332081 2245195 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:15:54.332156 2245195 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:15:54.332174 2245195 kubeadm.go:310] 
	I0414 14:15:54.332254 2245195 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:15:54.332264 2245195 kubeadm.go:310] 
	I0414 14:15:54.332330 2245195 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:15:54.332345 2245195 kubeadm.go:310] 
	I0414 14:15:54.332421 2245195 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:15:54.332536 2245195 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:15:54.332628 2245195 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:15:54.332638 2245195 kubeadm.go:310] 
	I0414 14:15:54.332771 2245195 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:15:54.332848 2245195 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:15:54.332854 2245195 kubeadm.go:310] 
	I0414 14:15:54.332922 2245195 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zu89f8.zeaf2f1xfahm8xki \
	I0414 14:15:54.333010 2245195 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a5a7cfa3817d077a98fd35a9c88a0bda6880ef9130519c66d815ea92b980d7c \
	I0414 14:15:54.333034 2245195 kubeadm.go:310] 	--control-plane 
	I0414 14:15:54.333039 2245195 kubeadm.go:310] 
	I0414 14:15:54.333109 2245195 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:15:54.333115 2245195 kubeadm.go:310] 
	I0414 14:15:54.333216 2245195 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zu89f8.zeaf2f1xfahm8xki \
	I0414 14:15:54.333391 2245195 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a5a7cfa3817d077a98fd35a9c88a0bda6880ef9130519c66d815ea92b980d7c 
	I0414 14:15:54.333407 2245195 cni.go:84] Creating CNI manager for "flannel"
	I0414 14:15:54.334755 2245195 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0414 14:15:54.335890 2245195 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 14:15:54.344160 2245195 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 14:15:54.344176 2245195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0414 14:15:54.374891 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0414 14:15:54.962412 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:54.963168 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:54.963191 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:54.963134 2246989 retry.go:31] will retry after 5.168467899s: waiting for domain to come up
	I0414 14:15:54.872301 2245195 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:15:54.872398 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:54.872433 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-793608 minikube.k8s.io/updated_at=2025_04_14T14_15_54_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=460835bb8f21087bfa90e48a25f4afc66a903d88 minikube.k8s.io/name=flannel-793608 minikube.k8s.io/primary=true
	I0414 14:15:54.889203 2245195 ops.go:34] apiserver oom_adj: -16
	I0414 14:15:55.015715 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:55.515973 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:56.016052 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:56.515895 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:57.015870 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:57.516553 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:58.016409 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:58.516652 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:58.638498 2245195 kubeadm.go:1113] duration metric: took 3.766167061s to wait for elevateKubeSystemPrivileges
	I0414 14:15:58.638542 2245195 kubeadm.go:394] duration metric: took 15.637248519s to StartCluster
	I0414 14:15:58.638569 2245195 settings.go:142] acquiring lock: {Name:mk2be36efecc8d95b489214d6449055db55f6f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:58.638677 2245195 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:15:58.640030 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/kubeconfig: {Name:mka4d12cff403cd78c270c5ea752d21aa135c1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:58.640295 2245195 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.179 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:15:58.640313 2245195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:15:58.640376 2245195 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:15:58.640504 2245195 addons.go:69] Setting storage-provisioner=true in profile "flannel-793608"
	I0414 14:15:58.640526 2245195 addons.go:69] Setting default-storageclass=true in profile "flannel-793608"
	I0414 14:15:58.640547 2245195 addons.go:238] Setting addon storage-provisioner=true in "flannel-793608"
	I0414 14:15:58.640550 2245195 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-793608"
	I0414 14:15:58.640593 2245195 host.go:66] Checking if "flannel-793608" exists ...
	I0414 14:15:58.640513 2245195 config.go:182] Loaded profile config "flannel-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:15:58.641023 2245195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:15:58.641041 2245195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:15:58.641052 2245195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:15:58.641080 2245195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:15:58.642641 2245195 out.go:177] * Verifying Kubernetes components...
	I0414 14:15:58.644038 2245195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:15:58.657672 2245195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35129
	I0414 14:15:58.657684 2245195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34053
	I0414 14:15:58.658211 2245195 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:15:58.658255 2245195 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:15:58.658709 2245195 main.go:141] libmachine: Using API Version  1
	I0414 14:15:58.658724 2245195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:15:58.658724 2245195 main.go:141] libmachine: Using API Version  1
	I0414 14:15:58.658741 2245195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:15:58.659096 2245195 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:15:58.659109 2245195 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:15:58.659278 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetState
	I0414 14:15:58.659593 2245195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:15:58.659622 2245195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:15:58.662886 2245195 addons.go:238] Setting addon default-storageclass=true in "flannel-793608"
	I0414 14:15:58.662943 2245195 host.go:66] Checking if "flannel-793608" exists ...
	I0414 14:15:58.663326 2245195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:15:58.663378 2245195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:15:58.676384 2245195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35115
	I0414 14:15:58.677014 2245195 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:15:58.677627 2245195 main.go:141] libmachine: Using API Version  1
	I0414 14:15:58.677663 2245195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:15:58.678164 2245195 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:15:58.678390 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetState
	I0414 14:15:58.680209 2245195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37159
	I0414 14:15:58.680777 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:58.680982 2245195 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:15:58.681468 2245195 main.go:141] libmachine: Using API Version  1
	I0414 14:15:58.681494 2245195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:15:58.681912 2245195 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:15:58.682367 2245195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:15:58.682406 2245195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:15:58.682479 2245195 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:15:58.683790 2245195 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:15:58.683805 2245195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 14:15:58.683823 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:58.687182 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:58.687747 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:58.687772 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:58.688014 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:58.688156 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:58.688286 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:58.688424 2245195 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa Username:docker}
	I0414 14:15:58.704623 2245195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42961
	I0414 14:15:58.705030 2245195 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:15:58.705522 2245195 main.go:141] libmachine: Using API Version  1
	I0414 14:15:58.705545 2245195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:15:58.705873 2245195 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:15:58.706088 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetState
	I0414 14:15:58.707899 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:58.708169 2245195 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 14:15:58.708185 2245195 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 14:15:58.708207 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:58.711345 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:58.711798 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:58.711837 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:58.712036 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:58.712219 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:58.712341 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:58.712475 2245195 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa Username:docker}
	I0414 14:15:58.899648 2245195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:15:58.899700 2245195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 14:15:59.086139 2245195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 14:15:59.182264 2245195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:15:59.471260 2245195 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0414 14:15:59.471370 2245195 main.go:141] libmachine: Making call to close driver server
	I0414 14:15:59.471390 2245195 main.go:141] libmachine: (flannel-793608) Calling .Close
	I0414 14:15:59.472005 2245195 node_ready.go:35] waiting up to 15m0s for node "flannel-793608" to be "Ready" ...
	I0414 14:15:59.472484 2245195 main.go:141] libmachine: (flannel-793608) DBG | Closing plugin on server side
	I0414 14:15:59.472484 2245195 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:15:59.472510 2245195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:15:59.472520 2245195 main.go:141] libmachine: Making call to close driver server
	I0414 14:15:59.472529 2245195 main.go:141] libmachine: (flannel-793608) Calling .Close
	I0414 14:15:59.472837 2245195 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:15:59.472856 2245195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:15:59.472856 2245195 main.go:141] libmachine: (flannel-793608) DBG | Closing plugin on server side
	I0414 14:15:59.509367 2245195 main.go:141] libmachine: Making call to close driver server
	I0414 14:15:59.509402 2245195 main.go:141] libmachine: (flannel-793608) Calling .Close
	I0414 14:15:59.509711 2245195 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:15:59.509732 2245195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:15:59.509736 2245195 main.go:141] libmachine: (flannel-793608) DBG | Closing plugin on server side
	I0414 14:15:59.829452 2245195 main.go:141] libmachine: Making call to close driver server
	I0414 14:15:59.829478 2245195 main.go:141] libmachine: (flannel-793608) Calling .Close
	I0414 14:15:59.829880 2245195 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:15:59.829909 2245195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:15:59.829920 2245195 main.go:141] libmachine: Making call to close driver server
	I0414 14:15:59.829931 2245195 main.go:141] libmachine: (flannel-793608) Calling .Close
	I0414 14:15:59.829969 2245195 main.go:141] libmachine: (flannel-793608) DBG | Closing plugin on server side
	I0414 14:15:59.831466 2245195 main.go:141] libmachine: (flannel-793608) DBG | Closing plugin on server side
	I0414 14:15:59.831574 2245195 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:15:59.831592 2245195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:15:59.832957 2245195 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0414 14:16:00.135640 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.136179 2246921 main.go:141] libmachine: (enable-default-cni-793608) found domain IP: 192.168.61.51
	I0414 14:16:00.136218 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has current primary IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.136228 2246921 main.go:141] libmachine: (enable-default-cni-793608) reserving static IP address...
	I0414 14:16:00.136619 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-793608", mac: "52:54:00:17:5c:90", ip: "192.168.61.51"} in network mk-enable-default-cni-793608
	I0414 14:16:00.222763 2246921 main.go:141] libmachine: (enable-default-cni-793608) reserved static IP address 192.168.61.51 for domain enable-default-cni-793608
	I0414 14:16:00.222799 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Getting to WaitForSSH function...
	I0414 14:16:00.222807 2246921 main.go:141] libmachine: (enable-default-cni-793608) waiting for SSH...
	I0414 14:16:00.225129 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.225617 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:minikube Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.225648 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.225770 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Using SSH client type: external
	I0414 14:16:00.225797 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Using SSH private key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa (-rw-------)
	I0414 14:16:00.225856 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:16:00.225876 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | About to run SSH command:
	I0414 14:16:00.225885 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | exit 0
	I0414 14:16:00.349424 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | SSH cmd err, output: <nil>: 
	I0414 14:16:00.349710 2246921 main.go:141] libmachine: (enable-default-cni-793608) KVM machine creation complete
	I0414 14:16:00.350094 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetConfigRaw
	I0414 14:16:00.350758 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:00.350973 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:00.351171 2246921 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:16:00.351186 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetState
	I0414 14:16:00.352474 2246921 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:16:00.352489 2246921 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:16:00.352495 2246921 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:16:00.352501 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:00.354605 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.355001 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.355029 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.355171 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:00.355341 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.355496 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.355665 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:00.355853 2246921 main.go:141] libmachine: Using SSH client type: native
	I0414 14:16:00.356079 2246921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0414 14:16:00.356090 2246921 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:16:00.456380 2246921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:16:00.456429 2246921 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:16:00.456438 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:00.460571 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.461142 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.461175 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.461350 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:00.461649 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.461843 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.461993 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:00.462152 2246921 main.go:141] libmachine: Using SSH client type: native
	I0414 14:16:00.462352 2246921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0414 14:16:00.462363 2246921 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:16:00.565817 2246921 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:16:00.565933 2246921 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:16:00.565955 2246921 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:16:00.565967 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetMachineName
	I0414 14:16:00.566215 2246921 buildroot.go:166] provisioning hostname "enable-default-cni-793608"
	I0414 14:16:00.566248 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetMachineName
	I0414 14:16:00.566475 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:00.569565 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.570007 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.570036 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.570148 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:00.570313 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.570512 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.570649 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:00.570830 2246921 main.go:141] libmachine: Using SSH client type: native
	I0414 14:16:00.571038 2246921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0414 14:16:00.571050 2246921 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-793608 && echo "enable-default-cni-793608" | sudo tee /etc/hostname
	I0414 14:16:00.692563 2246921 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-793608
	
	I0414 14:16:00.692608 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:00.695656 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.695992 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.696018 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.696190 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:00.696382 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.696512 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.696618 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:00.696827 2246921 main.go:141] libmachine: Using SSH client type: native
	I0414 14:16:00.697070 2246921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0414 14:16:00.697097 2246921 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-793608' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-793608/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-793608' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:16:00.806026 2246921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:16:00.806057 2246921 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20623-2183077/.minikube CaCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20623-2183077/.minikube}
	I0414 14:16:00.806076 2246921 buildroot.go:174] setting up certificates
	I0414 14:16:00.806087 2246921 provision.go:84] configureAuth start
	I0414 14:16:00.806096 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetMachineName
	I0414 14:16:00.806436 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetIP
	I0414 14:16:00.809322 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.809741 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.809771 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.809895 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:00.812367 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.812741 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.812771 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.812939 2246921 provision.go:143] copyHostCerts
	I0414 14:16:00.812997 2246921 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem, removing ...
	I0414 14:16:00.813016 2246921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem
	I0414 14:16:00.813075 2246921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem (1078 bytes)
	I0414 14:16:00.813177 2246921 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem, removing ...
	I0414 14:16:00.813185 2246921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem
	I0414 14:16:00.813204 2246921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem (1123 bytes)
	I0414 14:16:00.813273 2246921 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem, removing ...
	I0414 14:16:00.813281 2246921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem
	I0414 14:16:00.813298 2246921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem (1675 bytes)
	I0414 14:16:00.813356 2246921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-793608 san=[127.0.0.1 192.168.61.51 enable-default-cni-793608 localhost minikube]
	I0414 14:16:00.907159 2246921 provision.go:177] copyRemoteCerts
	I0414 14:16:00.907230 2246921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:16:00.907255 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:00.909912 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.910303 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.910362 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.910514 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:00.910722 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.910890 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:00.911056 2246921 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa Username:docker}
	I0414 14:16:00.991599 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 14:16:01.015103 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0414 14:16:01.038414 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:16:01.061551 2246921 provision.go:87] duration metric: took 255.446538ms to configureAuth
	I0414 14:16:01.061589 2246921 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:16:01.061847 2246921 config.go:182] Loaded profile config "enable-default-cni-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:16:01.061953 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:01.064789 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.065216 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.065256 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.065409 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:01.065624 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.065779 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.065922 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:01.066067 2246921 main.go:141] libmachine: Using SSH client type: native
	I0414 14:16:01.066371 2246921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0414 14:16:01.066394 2246921 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 14:16:01.298967 2246921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 14:16:01.299001 2246921 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:16:01.299010 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetURL
	I0414 14:16:01.300270 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | using libvirt version 6000000
	I0414 14:16:01.302669 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.303154 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.303193 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.303319 2246921 main.go:141] libmachine: Docker is up and running!
	I0414 14:16:01.303335 2246921 main.go:141] libmachine: Reticulating splines...
	I0414 14:16:01.303344 2246921 client.go:171] duration metric: took 25.3946292s to LocalClient.Create
	I0414 14:16:01.303368 2246921 start.go:167] duration metric: took 25.394704554s to libmachine.API.Create "enable-default-cni-793608"
	I0414 14:16:01.303379 2246921 start.go:293] postStartSetup for "enable-default-cni-793608" (driver="kvm2")
	I0414 14:16:01.303391 2246921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:16:01.303418 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:01.303684 2246921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:16:01.303712 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:01.305963 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.306296 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.306333 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.306447 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:01.306611 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.306757 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:01.306883 2246921 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa Username:docker}
	I0414 14:16:01.391656 2246921 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:16:01.396053 2246921 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:16:01.396081 2246921 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/addons for local assets ...
	I0414 14:16:01.396141 2246921 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/files for local assets ...
	I0414 14:16:01.396212 2246921 filesync.go:149] local asset: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem -> 21904002.pem in /etc/ssl/certs
	I0414 14:16:01.396298 2246921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:16:01.406179 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:16:01.431109 2246921 start.go:296] duration metric: took 127.714931ms for postStartSetup
	I0414 14:16:01.431163 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetConfigRaw
	I0414 14:16:01.431902 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetIP
	I0414 14:16:01.434569 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.434922 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.434956 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.435258 2246921 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/config.json ...
	I0414 14:16:01.435452 2246921 start.go:128] duration metric: took 25.549370799s to createHost
	I0414 14:16:01.435475 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:01.437807 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.438150 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.438170 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.438300 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:01.438475 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.438685 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.438882 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:01.439043 2246921 main.go:141] libmachine: Using SSH client type: native
	I0414 14:16:01.439232 2246921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0414 14:16:01.439248 2246921 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:16:01.543192 2246921 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640161.512131339
	
	I0414 14:16:01.543222 2246921 fix.go:216] guest clock: 1744640161.512131339
	I0414 14:16:01.543232 2246921 fix.go:229] Guest: 2025-04-14 14:16:01.512131339 +0000 UTC Remote: 2025-04-14 14:16:01.435464689 +0000 UTC m=+29.759982396 (delta=76.66665ms)
	I0414 14:16:01.543257 2246921 fix.go:200] guest clock delta is within tolerance: 76.66665ms
	I0414 14:16:01.543264 2246921 start.go:83] releasing machines lock for "enable-default-cni-793608", held for 25.657434721s
	I0414 14:16:01.543289 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:01.543595 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetIP
	I0414 14:16:01.546776 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.547177 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.547209 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.547370 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:01.547937 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:01.548127 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:01.548243 2246921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:16:01.548294 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:01.548390 2246921 ssh_runner.go:195] Run: cat /version.json
	I0414 14:16:01.548429 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:01.551187 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.551441 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.551622 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.551651 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.551769 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:01.551902 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.551943 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.552007 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.552128 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:01.552233 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:01.552341 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.552436 2246921 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa Username:docker}
	I0414 14:16:01.552518 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:01.552664 2246921 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa Username:docker}
	I0414 14:16:01.626735 2246921 ssh_runner.go:195] Run: systemctl --version
	I0414 14:16:01.656365 2246921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 14:16:01.812225 2246921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:16:01.819633 2246921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:16:01.819716 2246921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:16:01.841839 2246921 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:16:01.841866 2246921 start.go:495] detecting cgroup driver to use...
	I0414 14:16:01.841952 2246921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 14:16:01.857973 2246921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 14:16:01.876392 2246921 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:16:01.876465 2246921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:16:01.890055 2246921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:16:01.903801 2246921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:16:02.017060 2246921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:16:02.157678 2246921 docker.go:233] disabling docker service ...
	I0414 14:16:02.157771 2246921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:16:02.172664 2246921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:16:02.187082 2246921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:16:02.331112 2246921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:16:02.472406 2246921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:16:02.489418 2246921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:16:02.510696 2246921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 14:16:02.510773 2246921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.523647 2246921 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 14:16:02.523745 2246921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.535466 2246921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.546736 2246921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.559297 2246921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:16:02.571500 2246921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.583906 2246921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.602844 2246921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.615974 2246921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:16:02.628273 2246921 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:16:02.628364 2246921 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:16:02.643490 2246921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:16:02.654314 2246921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:16:02.785718 2246921 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 14:16:02.885394 2246921 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 14:16:02.885481 2246921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 14:16:02.890584 2246921 start.go:563] Will wait 60s for crictl version
	I0414 14:16:02.890644 2246921 ssh_runner.go:195] Run: which crictl
	I0414 14:16:02.894771 2246921 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:16:02.944686 2246921 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 14:16:02.944817 2246921 ssh_runner.go:195] Run: crio --version
	I0414 14:16:02.977319 2246921 ssh_runner.go:195] Run: crio --version
	I0414 14:16:03.011026 2246921 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 14:15:59.833954 2245195 addons.go:514] duration metric: took 1.193578801s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0414 14:15:59.976226 2245195 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-793608" context rescaled to 1 replicas
	I0414 14:16:01.475533 2245195 node_ready.go:53] node "flannel-793608" has status "Ready":"False"
	I0414 14:16:03.476866 2245195 node_ready.go:53] node "flannel-793608" has status "Ready":"False"
	I0414 14:16:03.011997 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetIP
	I0414 14:16:03.014857 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:03.015311 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:03.015340 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:03.015594 2246921 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0414 14:16:03.020865 2246921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:16:03.036489 2246921 kubeadm.go:883] updating cluster {Name:enable-default-cni-793608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:enable-default-cni-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:16:03.036649 2246921 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:16:03.036718 2246921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:16:03.074619 2246921 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:16:03.074721 2246921 ssh_runner.go:195] Run: which lz4
	I0414 14:16:03.079439 2246921 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:16:03.084705 2246921 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:16:03.084757 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 14:16:04.551650 2246921 crio.go:462] duration metric: took 1.472256374s to copy over tarball
	I0414 14:16:04.551756 2246921 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:16:05.975760 2245195 node_ready.go:53] node "flannel-793608" has status "Ready":"False"
	I0414 14:16:08.138018 2245195 node_ready.go:53] node "flannel-793608" has status "Ready":"False"
	I0414 14:16:06.821676 2246921 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.269870769s)
	I0414 14:16:06.821713 2246921 crio.go:469] duration metric: took 2.270028033s to extract the tarball
	I0414 14:16:06.821725 2246921 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 14:16:06.862078 2246921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:16:06.905635 2246921 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 14:16:06.905661 2246921 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:16:06.905669 2246921 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.32.2 crio true true} ...
	I0414 14:16:06.905814 2246921 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-793608 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:enable-default-cni-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0414 14:16:06.905913 2246921 ssh_runner.go:195] Run: crio config
	I0414 14:16:06.967144 2246921 cni.go:84] Creating CNI manager for "bridge"
	I0414 14:16:06.967177 2246921 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:16:06.967207 2246921 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-793608 NodeName:enable-default-cni-793608 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:16:06.967367 2246921 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-793608"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.51"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 14:16:06.967440 2246921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:16:06.979475 2246921 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:16:06.979549 2246921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 14:16:06.989632 2246921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0414 14:16:07.006974 2246921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:16:07.022847 2246921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I0414 14:16:07.039334 2246921 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0414 14:16:07.044243 2246921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:16:07.057149 2246921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:16:07.178687 2246921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:16:07.197629 2246921 certs.go:68] Setting up /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608 for IP: 192.168.61.51
	I0414 14:16:07.197660 2246921 certs.go:194] generating shared ca certs ...
	I0414 14:16:07.197685 2246921 certs.go:226] acquiring lock for ca certs: {Name:mkd994da28098ae08a84efba20f096b52fe71222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:07.197885 2246921 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key
	I0414 14:16:07.197942 2246921 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key
	I0414 14:16:07.197956 2246921 certs.go:256] generating profile certs ...
	I0414 14:16:07.198029 2246921 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.key
	I0414 14:16:07.198048 2246921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt with IP's: []
	I0414 14:16:07.570874 2246921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt ...
	I0414 14:16:07.570904 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: {Name:mk64c63d6e720c22aec573b6c12aa4a432b22501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:07.571092 2246921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.key ...
	I0414 14:16:07.571109 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.key: {Name:mk0c2d9a7feb9ede0f0a997f4aa74d9da8bd11d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:07.571225 2246921 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.key.73eedca3
	I0414 14:16:07.571249 2246921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.crt.73eedca3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.51]
	I0414 14:16:07.814982 2246921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.crt.73eedca3 ...
	I0414 14:16:07.815014 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.crt.73eedca3: {Name:mkeadb0ce7226e84070b03ee54954b097e65052a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:07.815181 2246921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.key.73eedca3 ...
	I0414 14:16:07.815199 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.key.73eedca3: {Name:mk35e329e7bcce4cbc7bc648e6d4baaf541bedca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:07.815273 2246921 certs.go:381] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.crt.73eedca3 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.crt
	I0414 14:16:07.815343 2246921 certs.go:385] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.key.73eedca3 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.key
	I0414 14:16:07.838493 2246921 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.key
	I0414 14:16:07.838529 2246921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.crt with IP's: []
	I0414 14:16:08.294087 2246921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.crt ...
	I0414 14:16:08.294124 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.crt: {Name:mk366e930f55c71d9e0d1a041fc8658466e0adca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:08.348261 2246921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.key ...
	I0414 14:16:08.348306 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.key: {Name:mk319b3ead18f415068eabdc65c4b137c462dab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:08.348591 2246921 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem (1338 bytes)
	W0414 14:16:08.348644 2246921 certs.go:480] ignoring /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400_empty.pem, impossibly tiny 0 bytes
	I0414 14:16:08.348659 2246921 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 14:16:08.348693 2246921 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem (1078 bytes)
	I0414 14:16:08.348724 2246921 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:16:08.348775 2246921 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem (1675 bytes)
	I0414 14:16:08.348827 2246921 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:16:08.349593 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:16:08.435452 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:16:08.462331 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:16:08.492377 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:16:08.517463 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 14:16:08.584048 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 14:16:08.609810 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:16:08.634266 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 14:16:08.663003 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:16:08.688663 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem --> /usr/share/ca-certificates/2190400.pem (1338 bytes)
	I0414 14:16:08.713403 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /usr/share/ca-certificates/21904002.pem (1708 bytes)
	I0414 14:16:08.736962 2246921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:16:08.754353 2246921 ssh_runner.go:195] Run: openssl version
	I0414 14:16:08.760345 2246921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21904002.pem && ln -fs /usr/share/ca-certificates/21904002.pem /etc/ssl/certs/21904002.pem"
	I0414 14:16:08.773588 2246921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21904002.pem
	I0414 14:16:08.789050 2246921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 13:02 /usr/share/ca-certificates/21904002.pem
	I0414 14:16:08.789138 2246921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21904002.pem
	I0414 14:16:08.801556 2246921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21904002.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:16:08.818825 2246921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:16:08.835651 2246921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:16:08.841380 2246921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:16:08.841444 2246921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:16:08.847453 2246921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:16:08.859009 2246921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2190400.pem && ln -fs /usr/share/ca-certificates/2190400.pem /etc/ssl/certs/2190400.pem"
	I0414 14:16:08.871527 2246921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2190400.pem
	I0414 14:16:08.877272 2246921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 13:02 /usr/share/ca-certificates/2190400.pem
	I0414 14:16:08.877350 2246921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2190400.pem
	I0414 14:16:08.883496 2246921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2190400.pem /etc/ssl/certs/51391683.0"
	I0414 14:16:08.895900 2246921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:16:08.900786 2246921 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:16:08.900847 2246921 kubeadm.go:392] StartCluster: {Name:enable-default-cni-793608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:enable-default-cni-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:16:08.900953 2246921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 14:16:08.901017 2246921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:16:08.943988 2246921 cri.go:89] found id: ""
	I0414 14:16:08.944083 2246921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:16:08.955727 2246921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:16:08.967585 2246921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:16:08.978749 2246921 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:16:08.978778 2246921 kubeadm.go:157] found existing configuration files:
	
	I0414 14:16:08.978835 2246921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:16:08.989765 2246921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:16:08.989846 2246921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:16:09.000464 2246921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:16:09.011408 2246921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:16:09.011475 2246921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:16:09.022110 2246921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:16:09.032105 2246921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:16:09.032178 2246921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:16:09.044673 2246921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:16:09.056844 2246921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:16:09.056918 2246921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 14:16:09.069647 2246921 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:16:09.269121 2246921 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:16:10.474671 2245195 node_ready.go:53] node "flannel-793608" has status "Ready":"False"
	I0414 14:16:10.979638 2245195 node_ready.go:49] node "flannel-793608" has status "Ready":"True"
	I0414 14:16:10.979667 2245195 node_ready.go:38] duration metric: took 11.50763178s for node "flannel-793608" to be "Ready" ...
	I0414 14:16:10.979680 2245195 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 14:16:10.994987 2245195 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:13.001584 2245195 pod_ready.go:103] pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:15.501069 2245195 pod_ready.go:103] pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:18.002171 2245195 pod_ready.go:103] pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:19.808099 2246921 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:16:19.808186 2246921 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:16:19.808295 2246921 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:16:19.808429 2246921 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:16:19.808568 2246921 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:16:19.808676 2246921 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:16:19.810130 2246921 out.go:235]   - Generating certificates and keys ...
	I0414 14:16:19.810238 2246921 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:16:19.810298 2246921 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:16:19.810365 2246921 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:16:19.810414 2246921 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:16:19.810470 2246921 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:16:19.810534 2246921 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:16:19.810597 2246921 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:16:19.810700 2246921 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-793608 localhost] and IPs [192.168.61.51 127.0.0.1 ::1]
	I0414 14:16:19.810746 2246921 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:16:19.810861 2246921 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-793608 localhost] and IPs [192.168.61.51 127.0.0.1 ::1]
	I0414 14:16:19.810922 2246921 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:16:19.810976 2246921 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:16:19.811019 2246921 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:16:19.811063 2246921 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:16:19.811110 2246921 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:16:19.811178 2246921 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:16:19.811247 2246921 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:16:19.811315 2246921 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:16:19.811416 2246921 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:16:19.811560 2246921 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:16:19.811693 2246921 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:16:19.813222 2246921 out.go:235]   - Booting up control plane ...
	I0414 14:16:19.813343 2246921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:16:19.813423 2246921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:16:19.813517 2246921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:16:19.813626 2246921 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:16:19.813707 2246921 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:16:19.813744 2246921 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:16:19.813927 2246921 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:16:19.814039 2246921 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:16:19.814093 2246921 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.194914ms
	I0414 14:16:19.814160 2246921 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:16:19.814211 2246921 kubeadm.go:310] [api-check] The API server is healthy after 5.003151438s
	I0414 14:16:19.814310 2246921 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:16:19.814464 2246921 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:16:19.814520 2246921 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:16:19.814781 2246921 kubeadm.go:310] [mark-control-plane] Marking the node enable-default-cni-793608 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:16:19.814844 2246921 kubeadm.go:310] [bootstrap-token] Using token: 3eizlo.lt0uyxdkcw3v7pf4
	I0414 14:16:19.816206 2246921 out.go:235]   - Configuring RBAC rules ...
	I0414 14:16:19.816316 2246921 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:16:19.816416 2246921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:16:19.816635 2246921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:16:19.816797 2246921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:16:19.816931 2246921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:16:19.817040 2246921 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:16:19.817207 2246921 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:16:19.817272 2246921 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:16:19.817346 2246921 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:16:19.817355 2246921 kubeadm.go:310] 
	I0414 14:16:19.817449 2246921 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:16:19.817464 2246921 kubeadm.go:310] 
	I0414 14:16:19.817567 2246921 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:16:19.817574 2246921 kubeadm.go:310] 
	I0414 14:16:19.817595 2246921 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:16:19.817645 2246921 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:16:19.817714 2246921 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:16:19.817721 2246921 kubeadm.go:310] 
	I0414 14:16:19.817782 2246921 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:16:19.817791 2246921 kubeadm.go:310] 
	I0414 14:16:19.817831 2246921 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:16:19.817850 2246921 kubeadm.go:310] 
	I0414 14:16:19.817913 2246921 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:16:19.818015 2246921 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:16:19.818135 2246921 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:16:19.818154 2246921 kubeadm.go:310] 
	I0414 14:16:19.818285 2246921 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:16:19.818379 2246921 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:16:19.818388 2246921 kubeadm.go:310] 
	I0414 14:16:19.818499 2246921 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3eizlo.lt0uyxdkcw3v7pf4 \
	I0414 14:16:19.818642 2246921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a5a7cfa3817d077a98fd35a9c88a0bda6880ef9130519c66d815ea92b980d7c \
	I0414 14:16:19.818667 2246921 kubeadm.go:310] 	--control-plane 
	I0414 14:16:19.818671 2246921 kubeadm.go:310] 
	I0414 14:16:19.818846 2246921 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:16:19.818859 2246921 kubeadm.go:310] 
	I0414 14:16:19.818924 2246921 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3eizlo.lt0uyxdkcw3v7pf4 \
	I0414 14:16:19.819079 2246921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a5a7cfa3817d077a98fd35a9c88a0bda6880ef9130519c66d815ea92b980d7c 
	I0414 14:16:19.819111 2246921 cni.go:84] Creating CNI manager for "bridge"
	I0414 14:16:19.820701 2246921 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 14:16:19.822064 2246921 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 14:16:19.833700 2246921 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 14:16:19.853878 2246921 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:16:19.853933 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:19.853982 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-793608 minikube.k8s.io/updated_at=2025_04_14T14_16_19_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=460835bb8f21087bfa90e48a25f4afc66a903d88 minikube.k8s.io/name=enable-default-cni-793608 minikube.k8s.io/primary=true
	I0414 14:16:19.982063 2246921 ops.go:34] apiserver oom_adj: -16
	I0414 14:16:19.982081 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:20.483212 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:20.983097 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:21.482224 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:21.982202 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:22.483188 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:22.982274 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:23.483138 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:23.982281 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:24.127347 2246921 kubeadm.go:1113] duration metric: took 4.273479771s to wait for elevateKubeSystemPrivileges
	I0414 14:16:24.127397 2246921 kubeadm.go:394] duration metric: took 15.226555734s to StartCluster
	I0414 14:16:24.127425 2246921 settings.go:142] acquiring lock: {Name:mk2be36efecc8d95b489214d6449055db55f6f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:24.127515 2246921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:16:24.128586 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/kubeconfig: {Name:mka4d12cff403cd78c270c5ea752d21aa135c1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:24.128872 2246921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:16:24.128877 2246921 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:16:24.128973 2246921 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:16:24.129079 2246921 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-793608"
	I0414 14:16:24.129102 2246921 addons.go:238] Setting addon storage-provisioner=true in "enable-default-cni-793608"
	I0414 14:16:24.129137 2246921 host.go:66] Checking if "enable-default-cni-793608" exists ...
	I0414 14:16:24.129191 2246921 config.go:182] Loaded profile config "enable-default-cni-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:16:24.129134 2246921 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-793608"
	I0414 14:16:24.129295 2246921 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-793608"
	I0414 14:16:24.129659 2246921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:16:24.129708 2246921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:16:24.129784 2246921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:16:24.129837 2246921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:16:24.130608 2246921 out.go:177] * Verifying Kubernetes components...
	I0414 14:16:24.132086 2246921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:16:24.146823 2246921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35341
	I0414 14:16:24.147436 2246921 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:16:24.147995 2246921 main.go:141] libmachine: Using API Version  1
	I0414 14:16:24.148018 2246921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:16:24.148365 2246921 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:16:24.148957 2246921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:16:24.149005 2246921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:16:24.150594 2246921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44963
	I0414 14:16:24.151027 2246921 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:16:24.151504 2246921 main.go:141] libmachine: Using API Version  1
	I0414 14:16:24.151528 2246921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:16:24.151980 2246921 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:16:24.152177 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetState
	I0414 14:16:24.156031 2246921 addons.go:238] Setting addon default-storageclass=true in "enable-default-cni-793608"
	I0414 14:16:24.156084 2246921 host.go:66] Checking if "enable-default-cni-793608" exists ...
	I0414 14:16:24.156451 2246921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:16:24.156492 2246921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:16:24.166981 2246921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0414 14:16:24.167563 2246921 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:16:24.168160 2246921 main.go:141] libmachine: Using API Version  1
	I0414 14:16:24.168184 2246921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:16:24.168575 2246921 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:16:24.168767 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetState
	I0414 14:16:24.170740 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:24.172584 2246921 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:16:20.501644 2245195 pod_ready.go:103] pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:22.501757 2245195 pod_ready.go:103] pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:24.130339 2235858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:16:24.130631 2235858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:16:24.130653 2235858 kubeadm.go:310] 
	I0414 14:16:24.130704 2235858 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 14:16:24.130779 2235858 kubeadm.go:310] 		timed out waiting for the condition
	I0414 14:16:24.130797 2235858 kubeadm.go:310] 
	I0414 14:16:24.130844 2235858 kubeadm.go:310] 	This error is likely caused by:
	I0414 14:16:24.130904 2235858 kubeadm.go:310] 		- The kubelet is not running
	I0414 14:16:24.131056 2235858 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 14:16:24.131075 2235858 kubeadm.go:310] 
	I0414 14:16:24.131212 2235858 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 14:16:24.131254 2235858 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 14:16:24.131293 2235858 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 14:16:24.131299 2235858 kubeadm.go:310] 
	I0414 14:16:24.131421 2235858 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 14:16:24.131520 2235858 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 14:16:24.131528 2235858 kubeadm.go:310] 
	I0414 14:16:24.131660 2235858 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 14:16:24.131767 2235858 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 14:16:24.131853 2235858 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 14:16:24.131938 2235858 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 14:16:24.131946 2235858 kubeadm.go:310] 
	I0414 14:16:24.133108 2235858 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:16:24.133245 2235858 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 14:16:24.133343 2235858 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 14:16:24.133446 2235858 kubeadm.go:394] duration metric: took 8m0.052385423s to StartCluster
	I0414 14:16:24.133512 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:16:24.133587 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:16:24.199915 2235858 cri.go:89] found id: ""
	I0414 14:16:24.199946 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.199956 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:16:24.199965 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:16:24.200032 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:16:24.247368 2235858 cri.go:89] found id: ""
	I0414 14:16:24.247407 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.247418 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:16:24.247427 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:16:24.247496 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:16:24.288565 2235858 cri.go:89] found id: ""
	I0414 14:16:24.288598 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.288610 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:16:24.288618 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:16:24.288687 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:16:24.329531 2235858 cri.go:89] found id: ""
	I0414 14:16:24.329568 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.329581 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:16:24.329591 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:16:24.329663 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:16:24.372326 2235858 cri.go:89] found id: ""
	I0414 14:16:24.372361 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.372370 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:16:24.372376 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:16:24.372447 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:16:24.423414 2235858 cri.go:89] found id: ""
	I0414 14:16:24.423447 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.423460 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:16:24.423469 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:16:24.423534 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:16:24.464828 2235858 cri.go:89] found id: ""
	I0414 14:16:24.464869 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.464882 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:16:24.464890 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:16:24.464970 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:16:24.505791 2235858 cri.go:89] found id: ""
	I0414 14:16:24.505820 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.505830 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:16:24.505844 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:16:24.505860 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:16:24.571908 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:16:24.571951 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:16:24.589579 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:16:24.589614 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:16:24.680606 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:16:24.680637 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:16:24.680659 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:16:24.800813 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:16:24.800859 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0414 14:16:24.849704 2235858 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 14:16:24.849777 2235858 out.go:270] * 
	W0414 14:16:24.849842 2235858 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 14:16:24.849868 2235858 out.go:270] * 
	W0414 14:16:24.851036 2235858 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:16:24.854829 2235858 out.go:201] 
	W0414 14:16:24.856198 2235858 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 14:16:24.856246 2235858 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 14:16:24.856269 2235858 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 14:16:24.857740 2235858 out.go:201] 
	
	
	==> CRI-O <==
	Apr 14 14:16:25 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:25.975743380Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640185975710424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e670889c-3cc2-4cbe-b24d-5545cf1d883b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:16:25 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:25.976282445Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76ac076e-065c-485b-8de7-d5d0cec30d2e name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:16:25 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:25.976356820Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76ac076e-065c-485b-8de7-d5d0cec30d2e name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:16:25 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:25.976398102Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=76ac076e-065c-485b-8de7-d5d0cec30d2e name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.021498147Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=edc367f7-f859-4d77-9476-7be9e081a282 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.021581686Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=edc367f7-f859-4d77-9476-7be9e081a282 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.022976165Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c1c7f05-b728-4a88-ad13-3c823eb9d0c9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.023489687Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640186023465710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c1c7f05-b728-4a88-ad13-3c823eb9d0c9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.025069720Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f58227ec-16e5-4e0e-a5c9-c887eb41c121 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.025140956Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f58227ec-16e5-4e0e-a5c9-c887eb41c121 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.025171671Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f58227ec-16e5-4e0e-a5c9-c887eb41c121 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.062108109Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a32d2fb6-b384-454a-a94c-6cce5797acb5 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.062189600Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a32d2fb6-b384-454a-a94c-6cce5797acb5 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.063350022Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ec259bc-a31f-406e-af24-4be3067427e4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.063748289Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640186063718679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ec259bc-a31f-406e-af24-4be3067427e4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.064411445Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d7ad45c-b2df-480e-9cdc-cc1db557d7d2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.064478390Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d7ad45c-b2df-480e-9cdc-cc1db557d7d2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.064517151Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9d7ad45c-b2df-480e-9cdc-cc1db557d7d2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.101728862Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ea1f61f-5a87-4c94-aacd-77beee333e9f name=/runtime.v1.RuntimeService/Version
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.101843177Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ea1f61f-5a87-4c94-aacd-77beee333e9f name=/runtime.v1.RuntimeService/Version
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.103599637Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51ff6e41-dfd5-42a6-9fb2-d621decd73a4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.104153937Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640186104014224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51ff6e41-dfd5-42a6-9fb2-d621decd73a4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.104841652Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55bd4cc1-5a6a-4420-9862-19ba4ff88319 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.104913965Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55bd4cc1-5a6a-4420-9862-19ba4ff88319 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:16:26 old-k8s-version-954411 crio[632]: time="2025-04-14 14:16:26.104971356Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=55bd4cc1-5a6a-4420-9862-19ba4ff88319 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr14 14:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055482] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043064] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr14 14:08] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.836790] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.609210] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.084082] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.058063] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072240] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.169515] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.152610] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.265682] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +8.281644] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +0.060503] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.889080] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[ +11.358788] kauditd_printk_skb: 46 callbacks suppressed
	[Apr14 14:12] systemd-fstab-generator[5015]: Ignoring "noauto" option for root device
	[Apr14 14:14] systemd-fstab-generator[5297]: Ignoring "noauto" option for root device
	[  +0.108430] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:16:26 up 8 min,  0 users,  load average: 0.01, 0.07, 0.03
	Linux old-k8s-version-954411 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5475]: net.(*Dialer).DialContext(0xc000b3d1a0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000c7ebd0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5475]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5475]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b484a0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000c7ebd0, 0x24, 0x60, 0x7f8d5c16c2a8, 0x118, ...)
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5475]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5475]: net/http.(*Transport).dial(0xc0005f9400, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000c7ebd0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5475]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5475]: net/http.(*Transport).dialConn(0xc0005f9400, 0x4f7fe00, 0xc000052030, 0x0, 0xc000356600, 0x5, 0xc000c7ebd0, 0x24, 0x0, 0xc000c1cfc0, ...)
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5475]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5475]: net/http.(*Transport).dialConnFor(0xc0005f9400, 0xc000c1bd90)
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5475]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5475]: created by net/http.(*Transport).queueForDial
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5475]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5475]: goroutine 172 [select]:
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5475]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000cca2a0, 0xc000b78e00, 0xc000c9d320, 0xc000c9d2c0)
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5475]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5475]: created by net.(*netFD).connect
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5475]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Apr 14 14:16:25 old-k8s-version-954411 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Apr 14 14:16:25 old-k8s-version-954411 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 14 14:16:25 old-k8s-version-954411 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5550]: I0414 14:16:25.817780    5550 server.go:416] Version: v1.20.0
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5550]: I0414 14:16:25.818136    5550 server.go:837] Client rotation is on, will bootstrap in background
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5550]: I0414 14:16:25.820179    5550 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5550]: W0414 14:16:25.821008    5550 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 14 14:16:25 old-k8s-version-954411 kubelet[5550]: I0414 14:16:25.821240    5550 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-954411 -n old-k8s-version-954411
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-954411 -n old-k8s-version-954411: exit status 2 (252.160587ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-954411" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (512.68s)
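The kubeadm output captured above points at the kubelet and the container runtime as the things to inspect, and the minikube suggestion in the same log mentions retrying with the kubelet cgroup driver pinned to systemd. A minimal sketch of those diagnostics, assuming shell access to the node (for example via `minikube ssh -p old-k8s-version-954411`) and that the profile name matches this run; the container ID is a placeholder:

	# inspect the kubelet service and its recent logs (per the kubeadm advice above)
	systemctl status kubelet
	journalctl -xeu kubelet

	# list any Kubernetes containers CRI-O started, then dump the logs of a failing one
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# retry the start with the cgroup-driver hint from the minikube suggestion above
	minikube start -p old-k8s-version-954411 --extra-config=kubelet.cgroup-driver=systemd

In this run the container listing came back empty (see the "container status" section above), so the kubelet journal is the more informative of the two checks.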

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:16:30.126091 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:16:51.059593 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:17:25.775141 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:17:36.311097 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/default-k8s-diff-port-460312/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:18:22.288678 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:18:22.295051 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:18:22.306422 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:18:22.327815 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:18:22.369293 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:18:22.450827 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:18:22.612443 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:18:22.933916 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:18:23.576048 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:18:24.857745 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:18:27.419174 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:18:32.540689 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:18:42.783039 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:18:46.263450 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:18:50.970602 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:18:50.977037 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:18:50.988345 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:18:51.009675 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:18:51.051087 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:18:51.132593 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:18:51.294179 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:18:51.615993 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:18:52.257873 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:18:53.539740 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:18:56.102120 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:19:01.224061 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:19:03.264426 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:19:11.465747 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:19:13.967950 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:19:31.947291 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:19:34.279174 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:19:34.285641 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:19:34.297068 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:19:34.318557 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:19:34.360095 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:19:34.441615 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:19:34.603269 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:19:34.924642 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:19:35.566855 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:19:36.849138 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:19:39.410771 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:19:44.226185 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:19:44.532102 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:19:52.448032 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/default-k8s-diff-port-460312/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:19:54.773594 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:20:04.999187 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:20:05.005669 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:20:05.017148 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:20:05.038615 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:20:05.080115 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:20:05.161687 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:20:05.323421 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:20:05.645219 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:20:06.286738 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:20:07.568572 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:20:10.130265 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:20:12.908692 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:20:15.251895 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:20:15.255290 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:20:20.153106 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/default-k8s-diff-port-460312/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:20:25.493555 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:20:26.456376 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:20:26.462854 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:20:26.474310 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:20:26.495797 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:20:26.537387 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:20:26.618997 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:20:26.780636 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:20:27.102457 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:20:27.744514 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:20:27.986231 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:20:29.026354 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:20:31.588219 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:20:36.710393 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:20:45.975081 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:20:46.951962 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:20:56.217331 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:21:06.148213 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:21:07.433455 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:21:26.285678 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:21:26.292159 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:21:26.303511 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:21:26.324996 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:21:26.366425 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:21:26.447973 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:21:26.609695 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:21:26.931152 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:21:26.936567 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:21:27.572532 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:21:28.854739 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:21:31.416415 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:21:34.830983 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:21:36.537927 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:21:46.779305 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:21:48.395563 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:22:03.833335 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:22:03.839741 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:22:03.851099 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:22:03.872512 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:22:03.913925 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:22:03.995403 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:22:04.157020 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:22:04.479061 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:22:05.121175 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:22:06.403311 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:22:07.260714 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:22:08.964630 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:22:14.086160 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:22:18.138721 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:22:24.328503 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:22:25.774509 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:22:44.810882 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:22:48.223025 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:22:48.857941 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:23:10.316905 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:23:22.288481 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:23:25.772675 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:23:46.263243 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:23:49.989888 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:23:50.970052 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:24:10.144423 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:24:18.673120 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:24:34.279275 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:24:47.694859 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:24:52.447821 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/default-k8s-diff-port-460312/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:25:01.980282 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:25:04.999013 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:25:26.456860 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
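The block of warnings above is the test helper repeatedly polling the kubernetes-dashboard namespace through the apiserver endpoint 192.168.39.90:8443; every attempt is the same pod list filtered by the k8s-app=kubernetes-dashboard label, and every attempt fails with connection refused because the apiserver never came back after the stop/start. A minimal sketch of that request, assuming client-go and an illustrative kubeconfig path (this is not the helper's actual code), looks like:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path is illustrative; the test uses the profile's own config.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same request the warnings show: list pods in kubernetes-dashboard
		// matching the k8s-app=kubernetes-dashboard label selector.
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// With the apiserver unreachable this is the "connection refused" seen above.
			fmt.Println("list failed:", err)
			return
		}
		for _, p := range pods.Items {
			fmt.Println(p.Name, p.Status.Phase)
		}
	}
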
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-954411 -n old-k8s-version-954411
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-954411 -n old-k8s-version-954411: exit status 2 (233.08887ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-954411" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
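Once the 9m0s wait expires, the helper probes the profile with minikube's status command using Go-template output: {{.APIServer}} reports Stopped (exit status 2) while {{.Host}} just below reports Running, i.e. the VM is up but the apiserver is not. A hedged sketch of reproducing that probe from Go, combining both fields in one template string (the binary path and format string here simply mirror what the log shows, not the helper's exact code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Ask minikube for host and apiserver state in a single Go-template format string.
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format", "host:{{.Host}} apiserver:{{.APIServer}}",
			"-p", "old-k8s-version-954411")
		out, err := cmd.CombinedOutput()
		// Expected for this failure: "host:Running apiserver:Stopped" with exit status 2,
		// matching the two separate probes recorded in the log.
		fmt.Printf("%s (err=%v)\n", out, err)
	}
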
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-954411 -n old-k8s-version-954411
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-954411 -n old-k8s-version-954411: exit status 2 (216.409298ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-954411 logs -n 25
E0414 14:25:27.986150 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo iptables -t nat -L -n -v                        |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608 sudo cat                | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608 sudo cat                | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608 sudo cat                | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:15:31
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:15:31.712686 2246921 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:15:31.712831 2246921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:15:31.712841 2246921 out.go:358] Setting ErrFile to fd 2...
	I0414 14:15:31.712845 2246921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:15:31.713023 2246921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 14:15:31.713616 2246921 out.go:352] Setting JSON to false
	I0414 14:15:31.714831 2246921 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":169071,"bootTime":1744471061,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:15:31.714947 2246921 start.go:139] virtualization: kvm guest
	I0414 14:15:31.717011 2246921 out.go:177] * [enable-default-cni-793608] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:15:31.718463 2246921 out.go:177]   - MINIKUBE_LOCATION=20623
	I0414 14:15:31.718471 2246921 notify.go:220] Checking for updates...
	I0414 14:15:31.720654 2246921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:15:31.721764 2246921 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:15:31.722980 2246921 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:15:31.724178 2246921 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:15:31.725315 2246921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:15:31.727113 2246921 config.go:182] Loaded profile config "bridge-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:15:31.727265 2246921 config.go:182] Loaded profile config "flannel-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:15:31.727430 2246921 config.go:182] Loaded profile config "old-k8s-version-954411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 14:15:31.727563 2246921 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:15:31.767165 2246921 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:15:31.768293 2246921 start.go:297] selected driver: kvm2
	I0414 14:15:31.768305 2246921 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:15:31.768317 2246921 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:15:31.769036 2246921 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:15:31.769109 2246921 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20623-2183077/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:15:31.784672 2246921 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:15:31.784720 2246921 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0414 14:15:31.784990 2246921 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0414 14:15:31.785021 2246921 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:15:31.785052 2246921 cni.go:84] Creating CNI manager for "bridge"
	I0414 14:15:31.785058 2246921 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 14:15:31.785117 2246921 start.go:340] cluster config:
	{Name:enable-default-cni-793608 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:enable-default-cni-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:15:31.785199 2246921 iso.go:125] acquiring lock: {Name:mk1b6bc811d798b73231639961523f4c8d001a9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:15:31.786961 2246921 out.go:177] * Starting "enable-default-cni-793608" primary control-plane node in "enable-default-cni-793608" cluster
	I0414 14:15:29.994679 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:29.995209 2245195 main.go:141] libmachine: (flannel-793608) DBG | unable to find current IP address of domain flannel-793608 in network mk-flannel-793608
	I0414 14:15:29.995234 2245195 main.go:141] libmachine: (flannel-793608) DBG | I0414 14:15:29.995171 2245218 retry.go:31] will retry after 4.26066759s: waiting for domain to come up
	I0414 14:15:34.260693 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.261277 2245195 main.go:141] libmachine: (flannel-793608) found domain IP: 192.168.72.179
	I0414 14:15:34.261303 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has current primary IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.261308 2245195 main.go:141] libmachine: (flannel-793608) reserving static IP address...
	I0414 14:15:34.261716 2245195 main.go:141] libmachine: (flannel-793608) DBG | unable to find host DHCP lease matching {name: "flannel-793608", mac: "52:54:00:62:9d:72", ip: "192.168.72.179"} in network mk-flannel-793608
	I0414 14:15:34.346350 2245195 main.go:141] libmachine: (flannel-793608) reserved static IP address 192.168.72.179 for domain flannel-793608
	I0414 14:15:34.346390 2245195 main.go:141] libmachine: (flannel-793608) waiting for SSH...
	I0414 14:15:34.346401 2245195 main.go:141] libmachine: (flannel-793608) DBG | Getting to WaitForSSH function...
	I0414 14:15:34.349135 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.349868 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.349899 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.350052 2245195 main.go:141] libmachine: (flannel-793608) DBG | Using SSH client type: external
	I0414 14:15:34.350078 2245195 main.go:141] libmachine: (flannel-793608) DBG | Using SSH private key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa (-rw-------)
	I0414 14:15:34.350116 2245195 main.go:141] libmachine: (flannel-793608) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:15:34.350131 2245195 main.go:141] libmachine: (flannel-793608) DBG | About to run SSH command:
	I0414 14:15:34.350150 2245195 main.go:141] libmachine: (flannel-793608) DBG | exit 0
	I0414 14:15:34.484913 2245195 main.go:141] libmachine: (flannel-793608) DBG | SSH cmd err, output: <nil>: 
	I0414 14:15:34.485227 2245195 main.go:141] libmachine: (flannel-793608) KVM machine creation complete
	I0414 14:15:34.485544 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetConfigRaw
	I0414 14:15:34.486221 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:34.486401 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:34.486553 2245195 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:15:34.486568 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetState
	I0414 14:15:34.487978 2245195 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:15:34.487993 2245195 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:15:34.488000 2245195 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:15:34.488008 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:34.490564 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.490891 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.490935 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.491086 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:34.491262 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.491420 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.491570 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:34.491735 2245195 main.go:141] libmachine: Using SSH client type: native
	I0414 14:15:34.491982 2245195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0414 14:15:34.491998 2245195 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:15:34.604245 2245195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:15:34.604277 2245195 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:15:34.604289 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:34.606969 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.607364 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.607394 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.607479 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:34.607712 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.607871 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.608010 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:34.608176 2245195 main.go:141] libmachine: Using SSH client type: native
	I0414 14:15:34.608423 2245195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0414 14:15:34.608435 2245195 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:15:31.788237 2246921 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:15:31.788265 2246921 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 14:15:31.788272 2246921 cache.go:56] Caching tarball of preloaded images
	I0414 14:15:31.788346 2246921 preload.go:172] Found /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 14:15:31.788355 2246921 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 14:15:31.788446 2246921 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/config.json ...
	I0414 14:15:31.788463 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/config.json: {Name:mkf77fb616cb68a05b6b927a1d1b666f496a2e2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:31.788580 2246921 start.go:360] acquireMachinesLock for enable-default-cni-793608: {Name:mka8bf7d0904b7ab9a32ecac2c5513c5d5418afd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:15:35.885793 2246921 start.go:364] duration metric: took 4.097174218s to acquireMachinesLock for "enable-default-cni-793608"
	I0414 14:15:35.885866 2246921 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-793608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:enable-default-cni-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:15:35.886064 2246921 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:15:35.888060 2246921 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0414 14:15:35.888295 2246921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:15:35.888367 2246921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:15:35.906793 2246921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
	I0414 14:15:35.907218 2246921 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:15:35.907761 2246921 main.go:141] libmachine: Using API Version  1
	I0414 14:15:35.907787 2246921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:15:35.908162 2246921 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:15:35.908377 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetMachineName
	I0414 14:15:35.908506 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:15:35.908667 2246921 start.go:159] libmachine.API.Create for "enable-default-cni-793608" (driver="kvm2")
	I0414 14:15:35.908702 2246921 client.go:168] LocalClient.Create starting
	I0414 14:15:35.908763 2246921 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem
	I0414 14:15:35.908804 2246921 main.go:141] libmachine: Decoding PEM data...
	I0414 14:15:35.908828 2246921 main.go:141] libmachine: Parsing certificate...
	I0414 14:15:35.908911 2246921 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem
	I0414 14:15:35.908946 2246921 main.go:141] libmachine: Decoding PEM data...
	I0414 14:15:35.908967 2246921 main.go:141] libmachine: Parsing certificate...
	I0414 14:15:35.909001 2246921 main.go:141] libmachine: Running pre-create checks...
	I0414 14:15:35.909014 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .PreCreateCheck
	I0414 14:15:35.909444 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetConfigRaw
	I0414 14:15:35.909876 2246921 main.go:141] libmachine: Creating machine...
	I0414 14:15:35.909891 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .Create
	I0414 14:15:35.910047 2246921 main.go:141] libmachine: (enable-default-cni-793608) creating KVM machine...
	I0414 14:15:35.910070 2246921 main.go:141] libmachine: (enable-default-cni-793608) creating network...
	I0414 14:15:35.911361 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found existing default KVM network
	I0414 14:15:35.912285 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:35.912133 2246989 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:06:97:78} reservation:<nil>}
	I0414 14:15:35.913042 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:35.912966 2246989 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:af:99:3f} reservation:<nil>}
	I0414 14:15:35.914019 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:35.913920 2246989 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000292a90}
	I0414 14:15:35.914054 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | created network xml: 
	I0414 14:15:35.914078 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | <network>
	I0414 14:15:35.914088 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |   <name>mk-enable-default-cni-793608</name>
	I0414 14:15:35.914099 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |   <dns enable='no'/>
	I0414 14:15:35.914108 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |   
	I0414 14:15:35.914122 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0414 14:15:35.914132 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |     <dhcp>
	I0414 14:15:35.914142 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0414 14:15:35.914154 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |     </dhcp>
	I0414 14:15:35.914161 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |   </ip>
	I0414 14:15:35.914173 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |   
	I0414 14:15:35.914184 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | </network>
	I0414 14:15:35.914202 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | 
	I0414 14:15:35.919363 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | trying to create private KVM network mk-enable-default-cni-793608 192.168.61.0/24...
	I0414 14:15:36.004404 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | private KVM network mk-enable-default-cni-793608 192.168.61.0/24 created
	I0414 14:15:36.004444 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting up store path in /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608 ...
	I0414 14:15:36.004472 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:36.004346 2246989 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:15:36.004493 2246921 main.go:141] libmachine: (enable-default-cni-793608) building disk image from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:15:36.004516 2246921 main.go:141] libmachine: (enable-default-cni-793608) Downloading /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:15:36.310781 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:36.310631 2246989 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa...
	I0414 14:15:36.425010 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:36.424863 2246989 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/enable-default-cni-793608.rawdisk...
	I0414 14:15:36.425050 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Writing magic tar header
	I0414 14:15:36.425064 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Writing SSH key tar header
	I0414 14:15:36.425072 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:36.425023 2246989 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608 ...
	I0414 14:15:36.425218 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608
	I0414 14:15:36.425260 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608 (perms=drwx------)
	I0414 14:15:36.425270 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines
	I0414 14:15:36.425291 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:15:36.425304 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077
	I0414 14:15:36.425315 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:15:36.425326 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home/jenkins
	I0414 14:15:36.425339 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:15:36.425360 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube (perms=drwxr-xr-x)
	I0414 14:15:36.425371 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077 (perms=drwxrwxr-x)
	I0414 14:15:36.425379 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home
	I0414 14:15:36.425391 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | skipping /home - not owner
	I0414 14:15:36.425401 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:15:36.425420 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:15:36.425433 2246921 main.go:141] libmachine: (enable-default-cni-793608) creating domain...
	I0414 14:15:36.426763 2246921 main.go:141] libmachine: (enable-default-cni-793608) define libvirt domain using xml: 
	I0414 14:15:36.426788 2246921 main.go:141] libmachine: (enable-default-cni-793608) <domain type='kvm'>
	I0414 14:15:36.426799 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <name>enable-default-cni-793608</name>
	I0414 14:15:36.426807 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <memory unit='MiB'>3072</memory>
	I0414 14:15:36.426816 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <vcpu>2</vcpu>
	I0414 14:15:36.426832 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <features>
	I0414 14:15:36.426844 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <acpi/>
	I0414 14:15:36.426858 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <apic/>
	I0414 14:15:36.426869 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <pae/>
	I0414 14:15:36.426877 2246921 main.go:141] libmachine: (enable-default-cni-793608)     
	I0414 14:15:36.426882 2246921 main.go:141] libmachine: (enable-default-cni-793608)   </features>
	I0414 14:15:36.426902 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <cpu mode='host-passthrough'>
	I0414 14:15:36.426909 2246921 main.go:141] libmachine: (enable-default-cni-793608)   
	I0414 14:15:36.426914 2246921 main.go:141] libmachine: (enable-default-cni-793608)   </cpu>
	I0414 14:15:36.426921 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <os>
	I0414 14:15:36.426925 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <type>hvm</type>
	I0414 14:15:36.426963 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <boot dev='cdrom'/>
	I0414 14:15:36.427000 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <boot dev='hd'/>
	I0414 14:15:36.427014 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <bootmenu enable='no'/>
	I0414 14:15:36.427038 2246921 main.go:141] libmachine: (enable-default-cni-793608)   </os>
	I0414 14:15:36.427051 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <devices>
	I0414 14:15:36.427067 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <disk type='file' device='cdrom'>
	I0414 14:15:36.427085 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/boot2docker.iso'/>
	I0414 14:15:36.427097 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <target dev='hdc' bus='scsi'/>
	I0414 14:15:36.427109 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <readonly/>
	I0414 14:15:36.427119 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </disk>
	I0414 14:15:36.427129 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <disk type='file' device='disk'>
	I0414 14:15:36.427147 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:15:36.427170 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/enable-default-cni-793608.rawdisk'/>
	I0414 14:15:36.427181 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <target dev='hda' bus='virtio'/>
	I0414 14:15:36.427194 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </disk>
	I0414 14:15:36.427205 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <interface type='network'>
	I0414 14:15:36.427218 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <source network='mk-enable-default-cni-793608'/>
	I0414 14:15:36.427231 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <model type='virtio'/>
	I0414 14:15:36.427247 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </interface>
	I0414 14:15:36.427259 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <interface type='network'>
	I0414 14:15:36.427267 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <source network='default'/>
	I0414 14:15:36.427279 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <model type='virtio'/>
	I0414 14:15:36.427299 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </interface>
	I0414 14:15:36.427311 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <serial type='pty'>
	I0414 14:15:36.427326 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <target port='0'/>
	I0414 14:15:36.427338 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </serial>
	I0414 14:15:36.427349 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <console type='pty'>
	I0414 14:15:36.427370 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <target type='serial' port='0'/>
	I0414 14:15:36.427397 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </console>
	I0414 14:15:36.427410 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <rng model='virtio'>
	I0414 14:15:36.427421 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <backend model='random'>/dev/random</backend>
	I0414 14:15:36.427432 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </rng>
	I0414 14:15:36.427442 2246921 main.go:141] libmachine: (enable-default-cni-793608)     
	I0414 14:15:36.427451 2246921 main.go:141] libmachine: (enable-default-cni-793608)     
	I0414 14:15:36.427464 2246921 main.go:141] libmachine: (enable-default-cni-793608)   </devices>
	I0414 14:15:36.427475 2246921 main.go:141] libmachine: (enable-default-cni-793608) </domain>
	I0414 14:15:36.427488 2246921 main.go:141] libmachine: (enable-default-cni-793608) 
	I0414 14:15:36.431881 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:85:82:bc in network default
	I0414 14:15:36.432649 2246921 main.go:141] libmachine: (enable-default-cni-793608) starting domain...
	I0414 14:15:36.432690 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:36.432705 2246921 main.go:141] libmachine: (enable-default-cni-793608) ensuring networks are active...
	I0414 14:15:36.433501 2246921 main.go:141] libmachine: (enable-default-cni-793608) Ensuring network default is active
	I0414 14:15:36.433815 2246921 main.go:141] libmachine: (enable-default-cni-793608) Ensuring network mk-enable-default-cni-793608 is active
	I0414 14:15:36.434345 2246921 main.go:141] libmachine: (enable-default-cni-793608) getting domain XML...
	I0414 14:15:36.435023 2246921 main.go:141] libmachine: (enable-default-cni-793608) creating domain...
	I0414 14:15:34.721833 2245195 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:15:34.721950 2245195 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:15:34.721968 2245195 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:15:34.721980 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetMachineName
	I0414 14:15:34.722264 2245195 buildroot.go:166] provisioning hostname "flannel-793608"
	I0414 14:15:34.722299 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetMachineName
	I0414 14:15:34.722517 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:34.725190 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.725590 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.725618 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.725786 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:34.725976 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.726158 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.726304 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:34.726456 2245195 main.go:141] libmachine: Using SSH client type: native
	I0414 14:15:34.726666 2245195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0414 14:15:34.726685 2245195 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-793608 && echo "flannel-793608" | sudo tee /etc/hostname
	I0414 14:15:34.856671 2245195 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-793608
	
	I0414 14:15:34.856706 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:34.859492 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.859878 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.859918 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.860081 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:34.860306 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.860473 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.860626 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:34.860812 2245195 main.go:141] libmachine: Using SSH client type: native
	I0414 14:15:34.861092 2245195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0414 14:15:34.861118 2245195 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-793608' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-793608/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-793608' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:15:34.981989 2245195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:15:34.982020 2245195 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20623-2183077/.minikube CaCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20623-2183077/.minikube}
	I0414 14:15:34.982064 2245195 buildroot.go:174] setting up certificates
	I0414 14:15:34.982083 2245195 provision.go:84] configureAuth start
	I0414 14:15:34.982100 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetMachineName
	I0414 14:15:34.982387 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetIP
	I0414 14:15:34.985287 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.985634 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.985664 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.985812 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:34.987950 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.988286 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.988317 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.988471 2245195 provision.go:143] copyHostCerts
	I0414 14:15:34.988524 2245195 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem, removing ...
	I0414 14:15:34.988534 2245195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem
	I0414 14:15:34.988599 2245195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem (1078 bytes)
	I0414 14:15:34.988693 2245195 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem, removing ...
	I0414 14:15:34.988701 2245195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem
	I0414 14:15:34.988724 2245195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem (1123 bytes)
	I0414 14:15:34.988819 2245195 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem, removing ...
	I0414 14:15:34.988834 2245195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem
	I0414 14:15:34.988863 2245195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem (1675 bytes)
	I0414 14:15:34.988910 2245195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem org=jenkins.flannel-793608 san=[127.0.0.1 192.168.72.179 flannel-793608 localhost minikube]
	I0414 14:15:35.242680 2245195 provision.go:177] copyRemoteCerts
	I0414 14:15:35.242795 2245195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:15:35.242845 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:35.246504 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.246882 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.246915 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.247123 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:35.247346 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.247546 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:35.247691 2245195 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa Username:docker}
	I0414 14:15:35.335122 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 14:15:35.359746 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0414 14:15:35.383458 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:15:35.406827 2245195 provision.go:87] duration metric: took 424.726599ms to configureAuth
	I0414 14:15:35.406858 2245195 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:15:35.407035 2245195 config.go:182] Loaded profile config "flannel-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:15:35.407113 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:35.409975 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.410322 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.410352 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.410487 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:35.410685 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.410854 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.410996 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:35.411145 2245195 main.go:141] libmachine: Using SSH client type: native
	I0414 14:15:35.411363 2245195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0414 14:15:35.411378 2245195 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 14:15:35.634723 2245195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 14:15:35.634754 2245195 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:15:35.634762 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetURL
	I0414 14:15:35.636108 2245195 main.go:141] libmachine: (flannel-793608) DBG | using libvirt version 6000000
	I0414 14:15:35.638402 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.638738 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.638770 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.638978 2245195 main.go:141] libmachine: Docker is up and running!
	I0414 14:15:35.638990 2245195 main.go:141] libmachine: Reticulating splines...
	I0414 14:15:35.638999 2245195 client.go:171] duration metric: took 25.896323518s to LocalClient.Create
	I0414 14:15:35.639031 2245195 start.go:167] duration metric: took 25.896405712s to libmachine.API.Create "flannel-793608"
	I0414 14:15:35.639044 2245195 start.go:293] postStartSetup for "flannel-793608" (driver="kvm2")
	I0414 14:15:35.639058 2245195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:15:35.639082 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:35.639326 2245195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:15:35.639354 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:35.641386 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.641767 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.641796 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.641940 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:35.642082 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.642270 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:35.642382 2245195 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa Username:docker}
	I0414 14:15:35.727765 2245195 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:15:35.732019 2245195 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:15:35.732052 2245195 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/addons for local assets ...
	I0414 14:15:35.732122 2245195 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/files for local assets ...
	I0414 14:15:35.732246 2245195 filesync.go:149] local asset: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem -> 21904002.pem in /etc/ssl/certs
	I0414 14:15:35.732379 2245195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:15:35.742061 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:15:35.766562 2245195 start.go:296] duration metric: took 127.496422ms for postStartSetup
	I0414 14:15:35.766624 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetConfigRaw
	I0414 14:15:35.767287 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetIP
	I0414 14:15:35.770180 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.770527 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.770556 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.770795 2245195 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/config.json ...
	I0414 14:15:35.771009 2245195 start.go:128] duration metric: took 26.050328808s to createHost
	I0414 14:15:35.771033 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:35.773350 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.773680 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.773709 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.773847 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:35.774059 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.774197 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.774332 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:35.774490 2245195 main.go:141] libmachine: Using SSH client type: native
	I0414 14:15:35.774772 2245195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0414 14:15:35.774784 2245195 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:15:35.885598 2245195 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640135.860958804
	
	I0414 14:15:35.885628 2245195 fix.go:216] guest clock: 1744640135.860958804
	I0414 14:15:35.885639 2245195 fix.go:229] Guest: 2025-04-14 14:15:35.860958804 +0000 UTC Remote: 2025-04-14 14:15:35.771023131 +0000 UTC m=+26.173579221 (delta=89.935673ms)
	I0414 14:15:35.885673 2245195 fix.go:200] guest clock delta is within tolerance: 89.935673ms
	I0414 14:15:35.885683 2245195 start.go:83] releasing machines lock for "flannel-793608", held for 26.165125753s
	I0414 14:15:35.885713 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:35.886039 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetIP
	I0414 14:15:35.889061 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.889425 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.889466 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.889637 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:35.890211 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:35.890425 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:35.890536 2245195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:15:35.890579 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:35.890691 2245195 ssh_runner.go:195] Run: cat /version.json
	I0414 14:15:35.890721 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:35.893586 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.893869 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.893934 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.893981 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.894247 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:35.894384 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.894411 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.894457 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.894574 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:35.894629 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:35.894739 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.894808 2245195 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa Username:docker}
	I0414 14:15:35.894924 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:35.895057 2245195 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa Username:docker}
	I0414 14:15:35.982011 2245195 ssh_runner.go:195] Run: systemctl --version
	I0414 14:15:36.008338 2245195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 14:15:36.168391 2245195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:15:36.174476 2245195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:15:36.174551 2245195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:15:36.191051 2245195 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:15:36.191080 2245195 start.go:495] detecting cgroup driver to use...
	I0414 14:15:36.191168 2245195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 14:15:36.209096 2245195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 14:15:36.223881 2245195 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:15:36.223954 2245195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:15:36.239607 2245195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:15:36.254647 2245195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:15:36.382628 2245195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:15:36.567479 2245195 docker.go:233] disabling docker service ...
	I0414 14:15:36.567573 2245195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:15:36.583824 2245195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:15:36.597712 2245195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:15:36.773681 2245195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:15:36.916917 2245195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:15:36.935946 2245195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:15:36.958970 2245195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 14:15:36.959024 2245195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:36.972811 2245195 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 14:15:36.972871 2245195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:36.988108 2245195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:37.003343 2245195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:37.018161 2245195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:15:37.030406 2245195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:37.043236 2245195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:37.064170 2245195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:37.080502 2245195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:15:37.094496 2245195 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:15:37.094554 2245195 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:15:37.109299 2245195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:15:37.120177 2245195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:15:37.270593 2245195 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 14:15:37.363308 2245195 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 14:15:37.363395 2245195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 14:15:37.368889 2245195 start.go:563] Will wait 60s for crictl version
	I0414 14:15:37.368989 2245195 ssh_runner.go:195] Run: which crictl
	I0414 14:15:37.373260 2245195 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:15:37.419353 2245195 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 14:15:37.419459 2245195 ssh_runner.go:195] Run: crio --version
	I0414 14:15:37.452713 2245195 ssh_runner.go:195] Run: crio --version
	I0414 14:15:37.488597 2245195 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 14:15:37.489796 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetIP
	I0414 14:15:37.493160 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:37.493715 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:37.493740 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:37.494018 2245195 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0414 14:15:37.499012 2245195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:15:37.512911 2245195 kubeadm.go:883] updating cluster {Name:flannel-793608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.179 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:15:37.513053 2245195 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:15:37.513119 2245195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:15:37.548903 2245195 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:15:37.548981 2245195 ssh_runner.go:195] Run: which lz4
	I0414 14:15:37.553268 2245195 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:15:37.557856 2245195 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:15:37.557890 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 14:15:39.123096 2245195 crio.go:462] duration metric: took 1.569856354s to copy over tarball
	I0414 14:15:39.123200 2245195 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:15:37.970496 2246921 main.go:141] libmachine: (enable-default-cni-793608) waiting for IP...
	I0414 14:15:37.971657 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:37.972252 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:37.972347 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:37.972267 2246989 retry.go:31] will retry after 263.370551ms: waiting for domain to come up
	I0414 14:15:38.238079 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:38.238915 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:38.238941 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:38.238830 2246989 retry.go:31] will retry after 385.607481ms: waiting for domain to come up
	I0414 14:15:38.626321 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:38.627021 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:38.627050 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:38.626998 2246989 retry.go:31] will retry after 445.201612ms: waiting for domain to come up
	I0414 14:15:39.073922 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:39.074637 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:39.074669 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:39.074614 2246989 retry.go:31] will retry after 401.280526ms: waiting for domain to come up
	I0414 14:15:39.477622 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:39.478402 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:39.478431 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:39.478359 2246989 retry.go:31] will retry after 525.224065ms: waiting for domain to come up
	I0414 14:15:40.005081 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:40.005652 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:40.005679 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:40.005612 2246989 retry.go:31] will retry after 886.00622ms: waiting for domain to come up
	I0414 14:15:40.893950 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:40.894495 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:40.894532 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:40.894465 2246989 retry.go:31] will retry after 854.182582ms: waiting for domain to come up
	I0414 14:15:41.493709 2245195 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.370463717s)
	I0414 14:15:41.493748 2245195 crio.go:469] duration metric: took 2.370608674s to extract the tarball
	I0414 14:15:41.493759 2245195 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 14:15:41.535292 2245195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:15:41.588898 2245195 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 14:15:41.588940 2245195 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:15:41.588949 2245195 kubeadm.go:934] updating node { 192.168.72.179 8443 v1.32.2 crio true true} ...
	I0414 14:15:41.589074 2245195 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-793608 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:flannel-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0414 14:15:41.589140 2245195 ssh_runner.go:195] Run: crio config
	I0414 14:15:41.654490 2245195 cni.go:84] Creating CNI manager for "flannel"
	I0414 14:15:41.654526 2245195 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:15:41.654559 2245195 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.179 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-793608 NodeName:flannel-793608 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:15:41.654767 2245195 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-793608"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.179"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.179"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 14:15:41.654853 2245195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:15:41.665504 2245195 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:15:41.665589 2245195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 14:15:41.675974 2245195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0414 14:15:41.694468 2245195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:15:41.712581 2245195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0414 14:15:41.731194 2245195 ssh_runner.go:195] Run: grep 192.168.72.179	control-plane.minikube.internal$ /etc/hosts
	I0414 14:15:41.735372 2245195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:15:41.748968 2245195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:15:41.865867 2245195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:15:41.886997 2245195 certs.go:68] Setting up /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608 for IP: 192.168.72.179
	I0414 14:15:41.887023 2245195 certs.go:194] generating shared ca certs ...
	I0414 14:15:41.887041 2245195 certs.go:226] acquiring lock for ca certs: {Name:mkd994da28098ae08a84efba20f096b52fe71222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:41.887257 2245195 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key
	I0414 14:15:41.887344 2245195 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key
	I0414 14:15:41.887359 2245195 certs.go:256] generating profile certs ...
	I0414 14:15:41.887451 2245195 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.key
	I0414 14:15:41.887472 2245195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt with IP's: []
	I0414 14:15:42.047090 2245195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt ...
	I0414 14:15:42.047130 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: {Name:mk61725d6c2d598935bcc4ddc3016fd5f2c41ddf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:42.047361 2245195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.key ...
	I0414 14:15:42.047378 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.key: {Name:mk34fa3cf8ab863f5f74888d1351e7b4a1a82440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:42.047497 2245195 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.key.d08a0052
	I0414 14:15:42.047517 2245195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.crt.d08a0052 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.179]
	I0414 14:15:42.148599 2245195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.crt.d08a0052 ...
	I0414 14:15:42.148638 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.crt.d08a0052: {Name:mk1db924027905394f8766631f4c71ead06a8ced Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:42.148885 2245195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.key.d08a0052 ...
	I0414 14:15:42.148907 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.key.d08a0052: {Name:mkbaf3ac23585ef0764dcb14eee50a6ebe5b28d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:42.149024 2245195 certs.go:381] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.crt.d08a0052 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.crt
	I0414 14:15:42.149140 2245195 certs.go:385] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.key.d08a0052 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.key
	I0414 14:15:42.149237 2245195 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.key
	I0414 14:15:42.149261 2245195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.crt with IP's: []
	I0414 14:15:42.494187 2245195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.crt ...
	I0414 14:15:42.494227 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.crt: {Name:mk0e79a8197af3196f139854e3ee11b8a9027e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:42.494439 2245195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.key ...
	I0414 14:15:42.494459 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.key: {Name:mk9593886f9fd4b010d5b9a09f833fed6848aae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:42.494757 2245195 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem (1338 bytes)
	W0414 14:15:42.494818 2245195 certs.go:480] ignoring /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400_empty.pem, impossibly tiny 0 bytes
	I0414 14:15:42.494832 2245195 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 14:15:42.494857 2245195 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem (1078 bytes)
	I0414 14:15:42.494883 2245195 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:15:42.494912 2245195 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem (1675 bytes)
	I0414 14:15:42.494953 2245195 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:15:42.495564 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:15:42.528844 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:15:42.560780 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:15:42.606054 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:15:42.646740 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 14:15:42.680301 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:15:42.711568 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:15:42.740840 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 14:15:42.771555 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /usr/share/ca-certificates/21904002.pem (1708 bytes)
	I0414 14:15:42.807236 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:15:42.835699 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem --> /usr/share/ca-certificates/2190400.pem (1338 bytes)
	I0414 14:15:42.863445 2245195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:15:42.883659 2245195 ssh_runner.go:195] Run: openssl version
	I0414 14:15:42.890583 2245195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21904002.pem && ln -fs /usr/share/ca-certificates/21904002.pem /etc/ssl/certs/21904002.pem"
	I0414 14:15:42.901664 2245195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21904002.pem
	I0414 14:15:42.906367 2245195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 13:02 /usr/share/ca-certificates/21904002.pem
	I0414 14:15:42.906428 2245195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21904002.pem
	I0414 14:15:42.912610 2245195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21904002.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:15:42.923894 2245195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:15:42.935385 2245195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:15:42.940238 2245195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:15:42.940307 2245195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:15:42.946322 2245195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:15:42.960753 2245195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2190400.pem && ln -fs /usr/share/ca-certificates/2190400.pem /etc/ssl/certs/2190400.pem"
	I0414 14:15:42.973724 2245195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2190400.pem
	I0414 14:15:42.979243 2245195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 13:02 /usr/share/ca-certificates/2190400.pem
	I0414 14:15:42.979299 2245195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2190400.pem
	I0414 14:15:42.985427 2245195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2190400.pem /etc/ssl/certs/51391683.0"
	I0414 14:15:42.996662 2245195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:15:43.001220 2245195 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:15:43.001300 2245195 kubeadm.go:392] StartCluster: {Name:flannel-793608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.179 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:15:43.001402 2245195 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 14:15:43.001459 2245195 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:15:43.038601 2245195 cri.go:89] found id: ""
	I0414 14:15:43.038700 2245195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:15:43.049342 2245195 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:15:43.059616 2245195 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:15:43.070826 2245195 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:15:43.070850 2245195 kubeadm.go:157] found existing configuration files:
	
	I0414 14:15:43.070910 2245195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:15:43.081463 2245195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:15:43.081530 2245195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:15:43.091483 2245195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:15:43.103049 2245195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:15:43.103137 2245195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:15:43.113237 2245195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:15:43.124160 2245195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:15:43.124230 2245195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:15:43.138965 2245195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:15:43.153232 2245195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:15:43.153306 2245195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 14:15:43.167864 2245195 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:15:43.400744 2245195 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:15:41.750230 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:41.750751 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:41.750807 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:41.750744 2246989 retry.go:31] will retry after 1.224694163s: waiting for domain to come up
	I0414 14:15:42.976809 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:42.977336 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:42.977384 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:42.977328 2246989 retry.go:31] will retry after 1.264920996s: waiting for domain to come up
	I0414 14:15:44.243549 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:44.244159 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:44.244193 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:44.244066 2246989 retry.go:31] will retry after 1.517311486s: waiting for domain to come up
	I0414 14:15:45.763600 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:45.764116 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:45.764135 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:45.764091 2246989 retry.go:31] will retry after 1.746471018s: waiting for domain to come up
	I0414 14:15:44.130732 2235858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:15:44.130993 2235858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:15:47.511868 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:47.512619 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:47.512650 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:47.512522 2246989 retry.go:31] will retry after 3.501788139s: waiting for domain to come up
	I0414 14:15:51.016231 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:51.016805 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:51.016837 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:51.016759 2246989 retry.go:31] will retry after 3.940965891s: waiting for domain to come up
	I0414 14:15:54.321686 2245195 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:15:54.321774 2245195 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:15:54.321884 2245195 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:15:54.322091 2245195 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:15:54.322219 2245195 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:15:54.322316 2245195 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:15:54.323900 2245195 out.go:235]   - Generating certificates and keys ...
	I0414 14:15:54.323989 2245195 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:15:54.324068 2245195 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:15:54.324163 2245195 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:15:54.324244 2245195 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:15:54.324357 2245195 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:15:54.324444 2245195 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:15:54.324558 2245195 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:15:54.324765 2245195 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-793608 localhost] and IPs [192.168.72.179 127.0.0.1 ::1]
	I0414 14:15:54.324837 2245195 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:15:54.325003 2245195 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-793608 localhost] and IPs [192.168.72.179 127.0.0.1 ::1]
	I0414 14:15:54.325062 2245195 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:15:54.325116 2245195 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:15:54.325157 2245195 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:15:54.325240 2245195 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:15:54.325297 2245195 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:15:54.325361 2245195 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:15:54.325410 2245195 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:15:54.325469 2245195 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:15:54.325533 2245195 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:15:54.325622 2245195 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:15:54.325680 2245195 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:15:54.326976 2245195 out.go:235]   - Booting up control plane ...
	I0414 14:15:54.327061 2245195 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:15:54.327129 2245195 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:15:54.327223 2245195 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:15:54.327393 2245195 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:15:54.327473 2245195 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:15:54.327543 2245195 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:15:54.327735 2245195 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:15:54.327895 2245195 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:15:54.327988 2245195 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.78321ms
	I0414 14:15:54.328108 2245195 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:15:54.328217 2245195 kubeadm.go:310] [api-check] The API server is healthy after 5.502171207s
	I0414 14:15:54.328371 2245195 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:15:54.328532 2245195 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:15:54.328601 2245195 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:15:54.328798 2245195 kubeadm.go:310] [mark-control-plane] Marking the node flannel-793608 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:15:54.328865 2245195 kubeadm.go:310] [bootstrap-token] Using token: zu89f8.zeaf2f1xfahm8xki
	I0414 14:15:54.330659 2245195 out.go:235]   - Configuring RBAC rules ...
	I0414 14:15:54.330777 2245195 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:15:54.330853 2245195 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:15:54.330999 2245195 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:15:54.331151 2245195 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:15:54.331343 2245195 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:15:54.331475 2245195 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:15:54.331629 2245195 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:15:54.331710 2245195 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:15:54.331776 2245195 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:15:54.331786 2245195 kubeadm.go:310] 
	I0414 14:15:54.331859 2245195 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:15:54.331868 2245195 kubeadm.go:310] 
	I0414 14:15:54.331988 2245195 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:15:54.331996 2245195 kubeadm.go:310] 
	I0414 14:15:54.332023 2245195 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:15:54.332081 2245195 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:15:54.332156 2245195 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:15:54.332174 2245195 kubeadm.go:310] 
	I0414 14:15:54.332254 2245195 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:15:54.332264 2245195 kubeadm.go:310] 
	I0414 14:15:54.332330 2245195 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:15:54.332345 2245195 kubeadm.go:310] 
	I0414 14:15:54.332421 2245195 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:15:54.332536 2245195 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:15:54.332628 2245195 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:15:54.332638 2245195 kubeadm.go:310] 
	I0414 14:15:54.332771 2245195 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:15:54.332848 2245195 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:15:54.332854 2245195 kubeadm.go:310] 
	I0414 14:15:54.332922 2245195 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zu89f8.zeaf2f1xfahm8xki \
	I0414 14:15:54.333010 2245195 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a5a7cfa3817d077a98fd35a9c88a0bda6880ef9130519c66d815ea92b980d7c \
	I0414 14:15:54.333034 2245195 kubeadm.go:310] 	--control-plane 
	I0414 14:15:54.333039 2245195 kubeadm.go:310] 
	I0414 14:15:54.333109 2245195 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:15:54.333115 2245195 kubeadm.go:310] 
	I0414 14:15:54.333216 2245195 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zu89f8.zeaf2f1xfahm8xki \
	I0414 14:15:54.333391 2245195 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a5a7cfa3817d077a98fd35a9c88a0bda6880ef9130519c66d815ea92b980d7c 
	I0414 14:15:54.333407 2245195 cni.go:84] Creating CNI manager for "flannel"
	I0414 14:15:54.334755 2245195 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0414 14:15:54.335890 2245195 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 14:15:54.344160 2245195 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 14:15:54.344176 2245195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0414 14:15:54.374891 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0414 14:15:54.962412 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:54.963168 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:54.963191 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:54.963134 2246989 retry.go:31] will retry after 5.168467899s: waiting for domain to come up
	I0414 14:15:54.872301 2245195 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:15:54.872398 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:54.872433 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-793608 minikube.k8s.io/updated_at=2025_04_14T14_15_54_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=460835bb8f21087bfa90e48a25f4afc66a903d88 minikube.k8s.io/name=flannel-793608 minikube.k8s.io/primary=true
	I0414 14:15:54.889203 2245195 ops.go:34] apiserver oom_adj: -16
	I0414 14:15:55.015715 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:55.515973 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:56.016052 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:56.515895 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:57.015870 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:57.516553 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:58.016409 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:58.516652 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:58.638498 2245195 kubeadm.go:1113] duration metric: took 3.766167061s to wait for elevateKubeSystemPrivileges
	I0414 14:15:58.638542 2245195 kubeadm.go:394] duration metric: took 15.637248519s to StartCluster
	I0414 14:15:58.638569 2245195 settings.go:142] acquiring lock: {Name:mk2be36efecc8d95b489214d6449055db55f6f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:58.638677 2245195 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:15:58.640030 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/kubeconfig: {Name:mka4d12cff403cd78c270c5ea752d21aa135c1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:58.640295 2245195 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.179 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:15:58.640313 2245195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:15:58.640376 2245195 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:15:58.640504 2245195 addons.go:69] Setting storage-provisioner=true in profile "flannel-793608"
	I0414 14:15:58.640526 2245195 addons.go:69] Setting default-storageclass=true in profile "flannel-793608"
	I0414 14:15:58.640547 2245195 addons.go:238] Setting addon storage-provisioner=true in "flannel-793608"
	I0414 14:15:58.640550 2245195 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-793608"
	I0414 14:15:58.640593 2245195 host.go:66] Checking if "flannel-793608" exists ...
	I0414 14:15:58.640513 2245195 config.go:182] Loaded profile config "flannel-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:15:58.641023 2245195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:15:58.641041 2245195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:15:58.641052 2245195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:15:58.641080 2245195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:15:58.642641 2245195 out.go:177] * Verifying Kubernetes components...
	I0414 14:15:58.644038 2245195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:15:58.657672 2245195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35129
	I0414 14:15:58.657684 2245195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34053
	I0414 14:15:58.658211 2245195 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:15:58.658255 2245195 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:15:58.658709 2245195 main.go:141] libmachine: Using API Version  1
	I0414 14:15:58.658724 2245195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:15:58.658724 2245195 main.go:141] libmachine: Using API Version  1
	I0414 14:15:58.658741 2245195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:15:58.659096 2245195 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:15:58.659109 2245195 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:15:58.659278 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetState
	I0414 14:15:58.659593 2245195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:15:58.659622 2245195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:15:58.662886 2245195 addons.go:238] Setting addon default-storageclass=true in "flannel-793608"
	I0414 14:15:58.662943 2245195 host.go:66] Checking if "flannel-793608" exists ...
	I0414 14:15:58.663326 2245195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:15:58.663378 2245195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:15:58.676384 2245195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35115
	I0414 14:15:58.677014 2245195 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:15:58.677627 2245195 main.go:141] libmachine: Using API Version  1
	I0414 14:15:58.677663 2245195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:15:58.678164 2245195 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:15:58.678390 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetState
	I0414 14:15:58.680209 2245195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37159
	I0414 14:15:58.680777 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:58.680982 2245195 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:15:58.681468 2245195 main.go:141] libmachine: Using API Version  1
	I0414 14:15:58.681494 2245195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:15:58.681912 2245195 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:15:58.682367 2245195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:15:58.682406 2245195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:15:58.682479 2245195 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:15:58.683790 2245195 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:15:58.683805 2245195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 14:15:58.683823 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:58.687182 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:58.687747 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:58.687772 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:58.688014 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:58.688156 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:58.688286 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:58.688424 2245195 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa Username:docker}
	I0414 14:15:58.704623 2245195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42961
	I0414 14:15:58.705030 2245195 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:15:58.705522 2245195 main.go:141] libmachine: Using API Version  1
	I0414 14:15:58.705545 2245195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:15:58.705873 2245195 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:15:58.706088 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetState
	I0414 14:15:58.707899 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:58.708169 2245195 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 14:15:58.708185 2245195 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 14:15:58.708207 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:58.711345 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:58.711798 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:58.711837 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:58.712036 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:58.712219 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:58.712341 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:58.712475 2245195 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa Username:docker}
	I0414 14:15:58.899648 2245195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:15:58.899700 2245195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 14:15:59.086139 2245195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 14:15:59.182264 2245195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:15:59.471260 2245195 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0414 14:15:59.471370 2245195 main.go:141] libmachine: Making call to close driver server
	I0414 14:15:59.471390 2245195 main.go:141] libmachine: (flannel-793608) Calling .Close
	I0414 14:15:59.472005 2245195 node_ready.go:35] waiting up to 15m0s for node "flannel-793608" to be "Ready" ...
	I0414 14:15:59.472484 2245195 main.go:141] libmachine: (flannel-793608) DBG | Closing plugin on server side
	I0414 14:15:59.472484 2245195 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:15:59.472510 2245195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:15:59.472520 2245195 main.go:141] libmachine: Making call to close driver server
	I0414 14:15:59.472529 2245195 main.go:141] libmachine: (flannel-793608) Calling .Close
	I0414 14:15:59.472837 2245195 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:15:59.472856 2245195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:15:59.472856 2245195 main.go:141] libmachine: (flannel-793608) DBG | Closing plugin on server side
	I0414 14:15:59.509367 2245195 main.go:141] libmachine: Making call to close driver server
	I0414 14:15:59.509402 2245195 main.go:141] libmachine: (flannel-793608) Calling .Close
	I0414 14:15:59.509711 2245195 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:15:59.509732 2245195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:15:59.509736 2245195 main.go:141] libmachine: (flannel-793608) DBG | Closing plugin on server side
	I0414 14:15:59.829452 2245195 main.go:141] libmachine: Making call to close driver server
	I0414 14:15:59.829478 2245195 main.go:141] libmachine: (flannel-793608) Calling .Close
	I0414 14:15:59.829880 2245195 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:15:59.829909 2245195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:15:59.829920 2245195 main.go:141] libmachine: Making call to close driver server
	I0414 14:15:59.829931 2245195 main.go:141] libmachine: (flannel-793608) Calling .Close
	I0414 14:15:59.829969 2245195 main.go:141] libmachine: (flannel-793608) DBG | Closing plugin on server side
	I0414 14:15:59.831466 2245195 main.go:141] libmachine: (flannel-793608) DBG | Closing plugin on server side
	I0414 14:15:59.831574 2245195 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:15:59.831592 2245195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:15:59.832957 2245195 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0414 14:16:00.135640 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.136179 2246921 main.go:141] libmachine: (enable-default-cni-793608) found domain IP: 192.168.61.51
	I0414 14:16:00.136218 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has current primary IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.136228 2246921 main.go:141] libmachine: (enable-default-cni-793608) reserving static IP address...
	I0414 14:16:00.136619 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-793608", mac: "52:54:00:17:5c:90", ip: "192.168.61.51"} in network mk-enable-default-cni-793608
	I0414 14:16:00.222763 2246921 main.go:141] libmachine: (enable-default-cni-793608) reserved static IP address 192.168.61.51 for domain enable-default-cni-793608
	I0414 14:16:00.222799 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Getting to WaitForSSH function...
	I0414 14:16:00.222807 2246921 main.go:141] libmachine: (enable-default-cni-793608) waiting for SSH...
	I0414 14:16:00.225129 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.225617 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:minikube Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.225648 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.225770 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Using SSH client type: external
	I0414 14:16:00.225797 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Using SSH private key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa (-rw-------)
	I0414 14:16:00.225856 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:16:00.225876 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | About to run SSH command:
	I0414 14:16:00.225885 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | exit 0
	I0414 14:16:00.349424 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | SSH cmd err, output: <nil>: 
	I0414 14:16:00.349710 2246921 main.go:141] libmachine: (enable-default-cni-793608) KVM machine creation complete
	I0414 14:16:00.350094 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetConfigRaw
	I0414 14:16:00.350758 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:00.350973 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:00.351171 2246921 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:16:00.351186 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetState
	I0414 14:16:00.352474 2246921 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:16:00.352489 2246921 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:16:00.352495 2246921 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:16:00.352501 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:00.354605 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.355001 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.355029 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.355171 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:00.355341 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.355496 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.355665 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:00.355853 2246921 main.go:141] libmachine: Using SSH client type: native
	I0414 14:16:00.356079 2246921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0414 14:16:00.356090 2246921 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:16:00.456380 2246921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:16:00.456429 2246921 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:16:00.456438 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:00.460571 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.461142 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.461175 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.461350 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:00.461649 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.461843 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.461993 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:00.462152 2246921 main.go:141] libmachine: Using SSH client type: native
	I0414 14:16:00.462352 2246921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0414 14:16:00.462363 2246921 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:16:00.565817 2246921 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:16:00.565933 2246921 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:16:00.565955 2246921 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:16:00.565967 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetMachineName
	I0414 14:16:00.566215 2246921 buildroot.go:166] provisioning hostname "enable-default-cni-793608"
	I0414 14:16:00.566248 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetMachineName
	I0414 14:16:00.566475 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:00.569565 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.570007 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.570036 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.570148 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:00.570313 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.570512 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.570649 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:00.570830 2246921 main.go:141] libmachine: Using SSH client type: native
	I0414 14:16:00.571038 2246921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0414 14:16:00.571050 2246921 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-793608 && echo "enable-default-cni-793608" | sudo tee /etc/hostname
	I0414 14:16:00.692563 2246921 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-793608
	
	I0414 14:16:00.692608 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:00.695656 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.695992 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.696018 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.696190 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:00.696382 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.696512 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.696618 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:00.696827 2246921 main.go:141] libmachine: Using SSH client type: native
	I0414 14:16:00.697070 2246921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0414 14:16:00.697097 2246921 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-793608' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-793608/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-793608' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:16:00.806026 2246921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:16:00.806057 2246921 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20623-2183077/.minikube CaCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20623-2183077/.minikube}
	I0414 14:16:00.806076 2246921 buildroot.go:174] setting up certificates
	I0414 14:16:00.806087 2246921 provision.go:84] configureAuth start
	I0414 14:16:00.806096 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetMachineName
	I0414 14:16:00.806436 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetIP
	I0414 14:16:00.809322 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.809741 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.809771 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.809895 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:00.812367 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.812741 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.812771 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.812939 2246921 provision.go:143] copyHostCerts
	I0414 14:16:00.812997 2246921 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem, removing ...
	I0414 14:16:00.813016 2246921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem
	I0414 14:16:00.813075 2246921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem (1078 bytes)
	I0414 14:16:00.813177 2246921 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem, removing ...
	I0414 14:16:00.813185 2246921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem
	I0414 14:16:00.813204 2246921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem (1123 bytes)
	I0414 14:16:00.813273 2246921 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem, removing ...
	I0414 14:16:00.813281 2246921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem
	I0414 14:16:00.813298 2246921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem (1675 bytes)
	I0414 14:16:00.813356 2246921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-793608 san=[127.0.0.1 192.168.61.51 enable-default-cni-793608 localhost minikube]
	I0414 14:16:00.907159 2246921 provision.go:177] copyRemoteCerts
	I0414 14:16:00.907230 2246921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:16:00.907255 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:00.909912 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.910303 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.910362 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.910514 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:00.910722 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.910890 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:00.911056 2246921 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa Username:docker}
	I0414 14:16:00.991599 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 14:16:01.015103 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0414 14:16:01.038414 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:16:01.061551 2246921 provision.go:87] duration metric: took 255.446538ms to configureAuth
	I0414 14:16:01.061589 2246921 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:16:01.061847 2246921 config.go:182] Loaded profile config "enable-default-cni-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:16:01.061953 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:01.064789 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.065216 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.065256 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.065409 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:01.065624 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.065779 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.065922 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:01.066067 2246921 main.go:141] libmachine: Using SSH client type: native
	I0414 14:16:01.066371 2246921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0414 14:16:01.066394 2246921 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 14:16:01.298967 2246921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 14:16:01.299001 2246921 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:16:01.299010 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetURL
	I0414 14:16:01.300270 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | using libvirt version 6000000
	I0414 14:16:01.302669 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.303154 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.303193 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.303319 2246921 main.go:141] libmachine: Docker is up and running!
	I0414 14:16:01.303335 2246921 main.go:141] libmachine: Reticulating splines...
	I0414 14:16:01.303344 2246921 client.go:171] duration metric: took 25.3946292s to LocalClient.Create
	I0414 14:16:01.303368 2246921 start.go:167] duration metric: took 25.394704554s to libmachine.API.Create "enable-default-cni-793608"
	I0414 14:16:01.303379 2246921 start.go:293] postStartSetup for "enable-default-cni-793608" (driver="kvm2")
	I0414 14:16:01.303391 2246921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:16:01.303418 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:01.303684 2246921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:16:01.303712 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:01.305963 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.306296 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.306333 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.306447 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:01.306611 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.306757 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:01.306883 2246921 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa Username:docker}
	I0414 14:16:01.391656 2246921 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:16:01.396053 2246921 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:16:01.396081 2246921 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/addons for local assets ...
	I0414 14:16:01.396141 2246921 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/files for local assets ...
	I0414 14:16:01.396212 2246921 filesync.go:149] local asset: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem -> 21904002.pem in /etc/ssl/certs
	I0414 14:16:01.396298 2246921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:16:01.406179 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:16:01.431109 2246921 start.go:296] duration metric: took 127.714931ms for postStartSetup
	I0414 14:16:01.431163 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetConfigRaw
	I0414 14:16:01.431902 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetIP
	I0414 14:16:01.434569 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.434922 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.434956 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.435258 2246921 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/config.json ...
	I0414 14:16:01.435452 2246921 start.go:128] duration metric: took 25.549370799s to createHost
	I0414 14:16:01.435475 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:01.437807 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.438150 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.438170 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.438300 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:01.438475 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.438685 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.438882 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:01.439043 2246921 main.go:141] libmachine: Using SSH client type: native
	I0414 14:16:01.439232 2246921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0414 14:16:01.439248 2246921 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:16:01.543192 2246921 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640161.512131339
	
	I0414 14:16:01.543222 2246921 fix.go:216] guest clock: 1744640161.512131339
	I0414 14:16:01.543232 2246921 fix.go:229] Guest: 2025-04-14 14:16:01.512131339 +0000 UTC Remote: 2025-04-14 14:16:01.435464689 +0000 UTC m=+29.759982396 (delta=76.66665ms)
	I0414 14:16:01.543257 2246921 fix.go:200] guest clock delta is within tolerance: 76.66665ms
	I0414 14:16:01.543264 2246921 start.go:83] releasing machines lock for "enable-default-cni-793608", held for 25.657434721s
	I0414 14:16:01.543289 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:01.543595 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetIP
	I0414 14:16:01.546776 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.547177 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.547209 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.547370 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:01.547937 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:01.548127 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:01.548243 2246921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:16:01.548294 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:01.548390 2246921 ssh_runner.go:195] Run: cat /version.json
	I0414 14:16:01.548429 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:01.551187 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.551441 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.551622 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.551651 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.551769 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:01.551902 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.551943 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.552007 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.552128 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:01.552233 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:01.552341 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.552436 2246921 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa Username:docker}
	I0414 14:16:01.552518 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:01.552664 2246921 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa Username:docker}
	I0414 14:16:01.626735 2246921 ssh_runner.go:195] Run: systemctl --version
	I0414 14:16:01.656365 2246921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 14:16:01.812225 2246921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:16:01.819633 2246921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:16:01.819716 2246921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:16:01.841839 2246921 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:16:01.841866 2246921 start.go:495] detecting cgroup driver to use...
	I0414 14:16:01.841952 2246921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 14:16:01.857973 2246921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 14:16:01.876392 2246921 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:16:01.876465 2246921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:16:01.890055 2246921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:16:01.903801 2246921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:16:02.017060 2246921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:16:02.157678 2246921 docker.go:233] disabling docker service ...
	I0414 14:16:02.157771 2246921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:16:02.172664 2246921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:16:02.187082 2246921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:16:02.331112 2246921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:16:02.472406 2246921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:16:02.489418 2246921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:16:02.510696 2246921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 14:16:02.510773 2246921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.523647 2246921 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 14:16:02.523745 2246921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.535466 2246921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.546736 2246921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.559297 2246921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:16:02.571500 2246921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.583906 2246921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.602844 2246921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.615974 2246921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:16:02.628273 2246921 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:16:02.628364 2246921 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:16:02.643490 2246921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
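Note: the sysctl probe above fails only because the br_netfilter module is not loaded yet; minikube then loads it and enables IPv4 forwarding. A hedged sketch of the equivalent manual check:
	sudo modprobe br_netfilter                                      # load the module if it is missing
	lsmod | grep br_netfilter                                       # confirm it is loaded
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both are expected to report 1 once the module is loaded and forwarding is enabled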
	I0414 14:16:02.654314 2246921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:16:02.785718 2246921 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 14:16:02.885394 2246921 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 14:16:02.885481 2246921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 14:16:02.890584 2246921 start.go:563] Will wait 60s for crictl version
	I0414 14:16:02.890644 2246921 ssh_runner.go:195] Run: which crictl
	I0414 14:16:02.894771 2246921 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:16:02.944686 2246921 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 14:16:02.944817 2246921 ssh_runner.go:195] Run: crio --version
	I0414 14:16:02.977319 2246921 ssh_runner.go:195] Run: crio --version
	I0414 14:16:03.011026 2246921 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 14:15:59.833954 2245195 addons.go:514] duration metric: took 1.193578801s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0414 14:15:59.976226 2245195 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-793608" context rescaled to 1 replicas
	I0414 14:16:01.475533 2245195 node_ready.go:53] node "flannel-793608" has status "Ready":"False"
	I0414 14:16:03.476866 2245195 node_ready.go:53] node "flannel-793608" has status "Ready":"False"
	I0414 14:16:03.011997 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetIP
	I0414 14:16:03.014857 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:03.015311 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:03.015340 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:03.015594 2246921 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0414 14:16:03.020865 2246921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
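Note: the one-liner above pins host.minikube.internal in /etc/hosts by filtering out any stale entry and appending a fresh one. The same idempotent pattern as a reusable sketch (function name and temp path are illustrative, not from the log):
	pin_hosts_entry() {   # usage: pin_hosts_entry <ip> <hostname>
	  local ip="$1" host="$2"
	  # drop any line ending in "<tab><hostname>", then append the desired mapping
	  { grep -v $'\t'"${host}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$host"; } > "/tmp/hosts.$$"
	  sudo cp "/tmp/hosts.$$" /etc/hosts
	}
	pin_hosts_entry 192.168.61.1 host.minikube.internal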
	I0414 14:16:03.036489 2246921 kubeadm.go:883] updating cluster {Name:enable-default-cni-793608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:enable-default-cni-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:16:03.036649 2246921 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:16:03.036718 2246921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:16:03.074619 2246921 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:16:03.074721 2246921 ssh_runner.go:195] Run: which lz4
	I0414 14:16:03.079439 2246921 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:16:03.084705 2246921 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:16:03.084757 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 14:16:04.551650 2246921 crio.go:462] duration metric: took 1.472256374s to copy over tarball
	I0414 14:16:04.551756 2246921 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:16:05.975760 2245195 node_ready.go:53] node "flannel-793608" has status "Ready":"False"
	I0414 14:16:08.138018 2245195 node_ready.go:53] node "flannel-793608" has status "Ready":"False"
	I0414 14:16:06.821676 2246921 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.269870769s)
	I0414 14:16:06.821713 2246921 crio.go:469] duration metric: took 2.270028033s to extract the tarball
	I0414 14:16:06.821725 2246921 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 14:16:06.862078 2246921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:16:06.905635 2246921 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 14:16:06.905661 2246921 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:16:06.905669 2246921 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.32.2 crio true true} ...
	I0414 14:16:06.905814 2246921 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-793608 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:enable-default-cni-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0414 14:16:06.905913 2246921 ssh_runner.go:195] Run: crio config
	I0414 14:16:06.967144 2246921 cni.go:84] Creating CNI manager for "bridge"
	I0414 14:16:06.967177 2246921 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:16:06.967207 2246921 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-793608 NodeName:enable-default-cni-793608 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:16:06.967367 2246921 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-793608"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.51"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
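Note: the generated kubeadm/kubelet/kube-proxy config above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below) and copied to /var/tmp/minikube/kubeadm.yaml before `kubeadm init` runs. A hedged sketch for sanity-checking such a file by hand; `kubeadm config validate` is assumed to be available in this kubeadm release:
	sudo /var/lib/minikube/binaries/v1.32.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml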
	
	I0414 14:16:06.967440 2246921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:16:06.979475 2246921 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:16:06.979549 2246921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 14:16:06.989632 2246921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0414 14:16:07.006974 2246921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:16:07.022847 2246921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
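Note: the three transfers above install the kubelet drop-in (10-kubeadm.conf), the kubelet unit, and the staged kubeadm config. The effective kubelet unit, drop-ins included, can be inspected on the node with:
	systemctl cat kubelet   # prints /lib/systemd/system/kubelet.service plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf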
	I0414 14:16:07.039334 2246921 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0414 14:16:07.044243 2246921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:16:07.057149 2246921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:16:07.178687 2246921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:16:07.197629 2246921 certs.go:68] Setting up /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608 for IP: 192.168.61.51
	I0414 14:16:07.197660 2246921 certs.go:194] generating shared ca certs ...
	I0414 14:16:07.197685 2246921 certs.go:226] acquiring lock for ca certs: {Name:mkd994da28098ae08a84efba20f096b52fe71222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:07.197885 2246921 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key
	I0414 14:16:07.197942 2246921 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key
	I0414 14:16:07.197956 2246921 certs.go:256] generating profile certs ...
	I0414 14:16:07.198029 2246921 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.key
	I0414 14:16:07.198048 2246921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt with IP's: []
	I0414 14:16:07.570874 2246921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt ...
	I0414 14:16:07.570904 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: {Name:mk64c63d6e720c22aec573b6c12aa4a432b22501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:07.571092 2246921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.key ...
	I0414 14:16:07.571109 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.key: {Name:mk0c2d9a7feb9ede0f0a997f4aa74d9da8bd11d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:07.571225 2246921 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.key.73eedca3
	I0414 14:16:07.571249 2246921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.crt.73eedca3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.51]
	I0414 14:16:07.814982 2246921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.crt.73eedca3 ...
	I0414 14:16:07.815014 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.crt.73eedca3: {Name:mkeadb0ce7226e84070b03ee54954b097e65052a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:07.815181 2246921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.key.73eedca3 ...
	I0414 14:16:07.815199 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.key.73eedca3: {Name:mk35e329e7bcce4cbc7bc648e6d4baaf541bedca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:07.815273 2246921 certs.go:381] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.crt.73eedca3 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.crt
	I0414 14:16:07.815343 2246921 certs.go:385] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.key.73eedca3 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.key
	I0414 14:16:07.838493 2246921 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.key
	I0414 14:16:07.838529 2246921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.crt with IP's: []
	I0414 14:16:08.294087 2246921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.crt ...
	I0414 14:16:08.294124 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.crt: {Name:mk366e930f55c71d9e0d1a041fc8658466e0adca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:08.348261 2246921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.key ...
	I0414 14:16:08.348306 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.key: {Name:mk319b3ead18f415068eabdc65c4b137c462dab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:08.348591 2246921 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem (1338 bytes)
	W0414 14:16:08.348644 2246921 certs.go:480] ignoring /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400_empty.pem, impossibly tiny 0 bytes
	I0414 14:16:08.348659 2246921 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 14:16:08.348693 2246921 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem (1078 bytes)
	I0414 14:16:08.348724 2246921 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:16:08.348775 2246921 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem (1675 bytes)
	I0414 14:16:08.348827 2246921 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:16:08.349593 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:16:08.435452 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:16:08.462331 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:16:08.492377 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:16:08.517463 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 14:16:08.584048 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 14:16:08.609810 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:16:08.634266 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 14:16:08.663003 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:16:08.688663 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem --> /usr/share/ca-certificates/2190400.pem (1338 bytes)
	I0414 14:16:08.713403 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /usr/share/ca-certificates/21904002.pem (1708 bytes)
	I0414 14:16:08.736962 2246921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:16:08.754353 2246921 ssh_runner.go:195] Run: openssl version
	I0414 14:16:08.760345 2246921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21904002.pem && ln -fs /usr/share/ca-certificates/21904002.pem /etc/ssl/certs/21904002.pem"
	I0414 14:16:08.773588 2246921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21904002.pem
	I0414 14:16:08.789050 2246921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 13:02 /usr/share/ca-certificates/21904002.pem
	I0414 14:16:08.789138 2246921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21904002.pem
	I0414 14:16:08.801556 2246921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21904002.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:16:08.818825 2246921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:16:08.835651 2246921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:16:08.841380 2246921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:16:08.841444 2246921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:16:08.847453 2246921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:16:08.859009 2246921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2190400.pem && ln -fs /usr/share/ca-certificates/2190400.pem /etc/ssl/certs/2190400.pem"
	I0414 14:16:08.871527 2246921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2190400.pem
	I0414 14:16:08.877272 2246921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 13:02 /usr/share/ca-certificates/2190400.pem
	I0414 14:16:08.877350 2246921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2190400.pem
	I0414 14:16:08.883496 2246921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2190400.pem /etc/ssl/certs/51391683.0"
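Note: the openssl/ln sequence above builds the standard OpenSSL hashed-symlink layout: each CA certificate copied to /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink so OpenSSL can find it by hash. A short sketch of verifying one of them (the b5213941 hash for minikubeCA comes from the log above):
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # symlink that resolves to the minikube CA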
	I0414 14:16:08.895900 2246921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:16:08.900786 2246921 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:16:08.900847 2246921 kubeadm.go:392] StartCluster: {Name:enable-default-cni-793608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:enable-default-cni-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:16:08.900953 2246921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 14:16:08.901017 2246921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:16:08.943988 2246921 cri.go:89] found id: ""
	I0414 14:16:08.944083 2246921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:16:08.955727 2246921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:16:08.967585 2246921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:16:08.978749 2246921 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:16:08.978778 2246921 kubeadm.go:157] found existing configuration files:
	
	I0414 14:16:08.978835 2246921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:16:08.989765 2246921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:16:08.989846 2246921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:16:09.000464 2246921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:16:09.011408 2246921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:16:09.011475 2246921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:16:09.022110 2246921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:16:09.032105 2246921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:16:09.032178 2246921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:16:09.044673 2246921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:16:09.056844 2246921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:16:09.056918 2246921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 14:16:09.069647 2246921 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:16:09.269121 2246921 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:16:10.474671 2245195 node_ready.go:53] node "flannel-793608" has status "Ready":"False"
	I0414 14:16:10.979638 2245195 node_ready.go:49] node "flannel-793608" has status "Ready":"True"
	I0414 14:16:10.979667 2245195 node_ready.go:38] duration metric: took 11.50763178s for node "flannel-793608" to be "Ready" ...
	I0414 14:16:10.979680 2245195 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 14:16:10.994987 2245195 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:13.001584 2245195 pod_ready.go:103] pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:15.501069 2245195 pod_ready.go:103] pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:18.002171 2245195 pod_ready.go:103] pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:19.808099 2246921 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:16:19.808186 2246921 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:16:19.808295 2246921 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:16:19.808429 2246921 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:16:19.808568 2246921 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:16:19.808676 2246921 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:16:19.810130 2246921 out.go:235]   - Generating certificates and keys ...
	I0414 14:16:19.810238 2246921 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:16:19.810298 2246921 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:16:19.810365 2246921 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:16:19.810414 2246921 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:16:19.810470 2246921 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:16:19.810534 2246921 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:16:19.810597 2246921 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:16:19.810700 2246921 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-793608 localhost] and IPs [192.168.61.51 127.0.0.1 ::1]
	I0414 14:16:19.810746 2246921 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:16:19.810861 2246921 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-793608 localhost] and IPs [192.168.61.51 127.0.0.1 ::1]
	I0414 14:16:19.810922 2246921 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:16:19.810976 2246921 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:16:19.811019 2246921 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:16:19.811063 2246921 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:16:19.811110 2246921 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:16:19.811178 2246921 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:16:19.811247 2246921 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:16:19.811315 2246921 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:16:19.811416 2246921 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:16:19.811560 2246921 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:16:19.811693 2246921 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:16:19.813222 2246921 out.go:235]   - Booting up control plane ...
	I0414 14:16:19.813343 2246921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:16:19.813423 2246921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:16:19.813517 2246921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:16:19.813626 2246921 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:16:19.813707 2246921 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:16:19.813744 2246921 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:16:19.813927 2246921 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:16:19.814039 2246921 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:16:19.814093 2246921 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.194914ms
	I0414 14:16:19.814160 2246921 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:16:19.814211 2246921 kubeadm.go:310] [api-check] The API server is healthy after 5.003151438s
	I0414 14:16:19.814310 2246921 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:16:19.814464 2246921 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:16:19.814520 2246921 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:16:19.814781 2246921 kubeadm.go:310] [mark-control-plane] Marking the node enable-default-cni-793608 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:16:19.814844 2246921 kubeadm.go:310] [bootstrap-token] Using token: 3eizlo.lt0uyxdkcw3v7pf4
	I0414 14:16:19.816206 2246921 out.go:235]   - Configuring RBAC rules ...
	I0414 14:16:19.816316 2246921 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:16:19.816416 2246921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:16:19.816635 2246921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:16:19.816797 2246921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:16:19.816931 2246921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:16:19.817040 2246921 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:16:19.817207 2246921 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:16:19.817272 2246921 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:16:19.817346 2246921 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:16:19.817355 2246921 kubeadm.go:310] 
	I0414 14:16:19.817449 2246921 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:16:19.817464 2246921 kubeadm.go:310] 
	I0414 14:16:19.817567 2246921 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:16:19.817574 2246921 kubeadm.go:310] 
	I0414 14:16:19.817595 2246921 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:16:19.817645 2246921 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:16:19.817714 2246921 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:16:19.817721 2246921 kubeadm.go:310] 
	I0414 14:16:19.817782 2246921 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:16:19.817791 2246921 kubeadm.go:310] 
	I0414 14:16:19.817831 2246921 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:16:19.817850 2246921 kubeadm.go:310] 
	I0414 14:16:19.817913 2246921 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:16:19.818015 2246921 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:16:19.818135 2246921 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:16:19.818154 2246921 kubeadm.go:310] 
	I0414 14:16:19.818285 2246921 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:16:19.818379 2246921 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:16:19.818388 2246921 kubeadm.go:310] 
	I0414 14:16:19.818499 2246921 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3eizlo.lt0uyxdkcw3v7pf4 \
	I0414 14:16:19.818642 2246921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a5a7cfa3817d077a98fd35a9c88a0bda6880ef9130519c66d815ea92b980d7c \
	I0414 14:16:19.818667 2246921 kubeadm.go:310] 	--control-plane 
	I0414 14:16:19.818671 2246921 kubeadm.go:310] 
	I0414 14:16:19.818846 2246921 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:16:19.818859 2246921 kubeadm.go:310] 
	I0414 14:16:19.818924 2246921 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3eizlo.lt0uyxdkcw3v7pf4 \
	I0414 14:16:19.819079 2246921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a5a7cfa3817d077a98fd35a9c88a0bda6880ef9130519c66d815ea92b980d7c 
	I0414 14:16:19.819111 2246921 cni.go:84] Creating CNI manager for "bridge"
	I0414 14:16:19.820701 2246921 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 14:16:19.822064 2246921 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 14:16:19.833700 2246921 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
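Note: the 496-byte /etc/cni/net.d/1-k8s.conflist written above is not dumped in this log. For orientation only, a generic chained bridge-CNI config of this shape (contents illustrative, not the actual file; the 10.244.0.0/16 subnet matches the pod CIDR chosen earlier in the log):
	cat > /tmp/1-k8s.conflist.example <<'EOF'   # hypothetical example path, not the real file
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF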
	I0414 14:16:19.853878 2246921 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:16:19.853933 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:19.853982 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-793608 minikube.k8s.io/updated_at=2025_04_14T14_16_19_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=460835bb8f21087bfa90e48a25f4afc66a903d88 minikube.k8s.io/name=enable-default-cni-793608 minikube.k8s.io/primary=true
	I0414 14:16:19.982063 2246921 ops.go:34] apiserver oom_adj: -16
	I0414 14:16:19.982081 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:20.483212 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:20.983097 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:21.482224 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:21.982202 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:22.483188 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:22.982274 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:23.483138 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:23.982281 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:24.127347 2246921 kubeadm.go:1113] duration metric: took 4.273479771s to wait for elevateKubeSystemPrivileges
	I0414 14:16:24.127397 2246921 kubeadm.go:394] duration metric: took 15.226555734s to StartCluster
	I0414 14:16:24.127425 2246921 settings.go:142] acquiring lock: {Name:mk2be36efecc8d95b489214d6449055db55f6f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:24.127515 2246921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:16:24.128586 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/kubeconfig: {Name:mka4d12cff403cd78c270c5ea752d21aa135c1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:24.128872 2246921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:16:24.128877 2246921 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:16:24.128973 2246921 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:16:24.129079 2246921 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-793608"
	I0414 14:16:24.129102 2246921 addons.go:238] Setting addon storage-provisioner=true in "enable-default-cni-793608"
	I0414 14:16:24.129137 2246921 host.go:66] Checking if "enable-default-cni-793608" exists ...
	I0414 14:16:24.129191 2246921 config.go:182] Loaded profile config "enable-default-cni-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:16:24.129134 2246921 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-793608"
	I0414 14:16:24.129295 2246921 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-793608"
	I0414 14:16:24.129659 2246921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:16:24.129708 2246921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:16:24.129784 2246921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:16:24.129837 2246921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:16:24.130608 2246921 out.go:177] * Verifying Kubernetes components...
	I0414 14:16:24.132086 2246921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:16:24.146823 2246921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35341
	I0414 14:16:24.147436 2246921 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:16:24.147995 2246921 main.go:141] libmachine: Using API Version  1
	I0414 14:16:24.148018 2246921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:16:24.148365 2246921 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:16:24.148957 2246921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:16:24.149005 2246921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:16:24.150594 2246921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44963
	I0414 14:16:24.151027 2246921 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:16:24.151504 2246921 main.go:141] libmachine: Using API Version  1
	I0414 14:16:24.151528 2246921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:16:24.151980 2246921 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:16:24.152177 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetState
	I0414 14:16:24.156031 2246921 addons.go:238] Setting addon default-storageclass=true in "enable-default-cni-793608"
	I0414 14:16:24.156084 2246921 host.go:66] Checking if "enable-default-cni-793608" exists ...
	I0414 14:16:24.156451 2246921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:16:24.156492 2246921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:16:24.166981 2246921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0414 14:16:24.167563 2246921 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:16:24.168160 2246921 main.go:141] libmachine: Using API Version  1
	I0414 14:16:24.168184 2246921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:16:24.168575 2246921 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:16:24.168767 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetState
	I0414 14:16:24.170740 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:24.172584 2246921 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:16:20.501644 2245195 pod_ready.go:103] pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:22.501757 2245195 pod_ready.go:103] pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:24.130339 2235858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:16:24.130631 2235858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:16:24.130653 2235858 kubeadm.go:310] 
	I0414 14:16:24.130704 2235858 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 14:16:24.130779 2235858 kubeadm.go:310] 		timed out waiting for the condition
	I0414 14:16:24.130797 2235858 kubeadm.go:310] 
	I0414 14:16:24.130844 2235858 kubeadm.go:310] 	This error is likely caused by:
	I0414 14:16:24.130904 2235858 kubeadm.go:310] 		- The kubelet is not running
	I0414 14:16:24.131056 2235858 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 14:16:24.131075 2235858 kubeadm.go:310] 
	I0414 14:16:24.131212 2235858 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 14:16:24.131254 2235858 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 14:16:24.131293 2235858 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 14:16:24.131299 2235858 kubeadm.go:310] 
	I0414 14:16:24.131421 2235858 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 14:16:24.131520 2235858 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 14:16:24.131528 2235858 kubeadm.go:310] 
	I0414 14:16:24.131660 2235858 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 14:16:24.131767 2235858 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 14:16:24.131853 2235858 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 14:16:24.131938 2235858 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 14:16:24.131946 2235858 kubeadm.go:310] 
	I0414 14:16:24.133108 2235858 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:16:24.133245 2235858 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 14:16:24.133343 2235858 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
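Note: this second cluster (pid 2235858, Kubernetes v1.20.0) hits the classic wait-control-plane timeout: the kubelet never became healthy, so no control-plane containers exist (the empty crictl listings below confirm it). Beyond the commands kubeadm already suggests, a hedged sketch of the usual first checks on such a node, assuming the same CRI-O layout minikube configured earlier in this log:
	sudo systemctl is-active kubelet                          # is the service running at all?
	sudo journalctl -u kubelet -n 50 --no-pager               # most recent kubelet errors
	grep cgroupDriver /var/lib/kubelet/config.yaml            # must match the runtime's cgroup manager
	grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf    # "cgroupfs" in minikube's CRI-O setup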
	I0414 14:16:24.133446 2235858 kubeadm.go:394] duration metric: took 8m0.052385423s to StartCluster
	I0414 14:16:24.133512 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:16:24.133587 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:16:24.199915 2235858 cri.go:89] found id: ""
	I0414 14:16:24.199946 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.199956 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:16:24.199965 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:16:24.200032 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:16:24.247368 2235858 cri.go:89] found id: ""
	I0414 14:16:24.247407 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.247418 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:16:24.247427 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:16:24.247496 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:16:24.288565 2235858 cri.go:89] found id: ""
	I0414 14:16:24.288598 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.288610 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:16:24.288618 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:16:24.288687 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:16:24.329531 2235858 cri.go:89] found id: ""
	I0414 14:16:24.329568 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.329581 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:16:24.329591 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:16:24.329663 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:16:24.372326 2235858 cri.go:89] found id: ""
	I0414 14:16:24.372361 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.372370 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:16:24.372376 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:16:24.372447 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:16:24.423414 2235858 cri.go:89] found id: ""
	I0414 14:16:24.423447 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.423460 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:16:24.423469 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:16:24.423534 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:16:24.464828 2235858 cri.go:89] found id: ""
	I0414 14:16:24.464869 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.464882 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:16:24.464890 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:16:24.464970 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:16:24.505791 2235858 cri.go:89] found id: ""
	I0414 14:16:24.505820 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.505830 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:16:24.505844 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:16:24.505860 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:16:24.571908 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:16:24.571951 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:16:24.589579 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:16:24.589614 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:16:24.680606 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:16:24.680637 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:16:24.680659 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:16:24.800813 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:16:24.800859 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0414 14:16:24.849704 2235858 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 14:16:24.849777 2235858 out.go:270] * 
	W0414 14:16:24.849842 2235858 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 14:16:24.849868 2235858 out.go:270] * 
	W0414 14:16:24.851036 2235858 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:16:24.854829 2235858 out.go:201] 
	W0414 14:16:24.856198 2235858 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 14:16:24.856246 2235858 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 14:16:24.856269 2235858 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 14:16:24.857740 2235858 out.go:201] 
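	(The kubeadm output above repeatedly points at the kubelet: the control plane never comes up because http://localhost:10248/healthz is refused, and minikube exits with K8S_KUBELET_NOT_RUNNING. A minimal troubleshooting sketch for the affected node, assuming the failing profile is old-k8s-version-954411 — the node named in the CRI-O excerpt further below — and using only the commands quoted in the advice and suggestion above:
	
	  # inspect the kubelet on the node
	  out/minikube-linux-amd64 -p old-k8s-version-954411 ssh "sudo systemctl status kubelet"
	  out/minikube-linux-amd64 -p old-k8s-version-954411 ssh "sudo journalctl -xeu kubelet"
	  # look for crashed control-plane containers under CRI-O
	  out/minikube-linux-amd64 -p old-k8s-version-954411 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	  # retry the start with the cgroup-driver hint from the suggestion above
	  out/minikube-linux-amd64 start -p old-k8s-version-954411 --extra-config=kubelet.cgroup-driver=systemd
	)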
	I0414 14:16:24.173925 2246921 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:16:24.173948 2246921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 14:16:24.173970 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:24.176982 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:24.177524 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:24.177544 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:24.177698 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:24.177872 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:24.178021 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:24.178136 2246921 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa Username:docker}
	I0414 14:16:24.178979 2246921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40997
	I0414 14:16:24.179319 2246921 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:16:24.179745 2246921 main.go:141] libmachine: Using API Version  1
	I0414 14:16:24.179764 2246921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:16:24.180045 2246921 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:16:24.180622 2246921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:16:24.180659 2246921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:16:24.200932 2246921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44315
	I0414 14:16:24.201524 2246921 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:16:24.202218 2246921 main.go:141] libmachine: Using API Version  1
	I0414 14:16:24.202248 2246921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:16:24.202575 2246921 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:16:24.202815 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetState
	I0414 14:16:24.204228 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:24.204442 2246921 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 14:16:24.204458 2246921 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 14:16:24.204476 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:24.207373 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:24.207818 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:24.207841 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:24.207987 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:24.208140 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:24.208270 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:24.208396 2246921 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa Username:docker}
	I0414 14:16:24.524660 2246921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:16:24.524689 2246921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 14:16:24.614615 2246921 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-793608" to be "Ready" ...
	I0414 14:16:24.623835 2246921 node_ready.go:49] node "enable-default-cni-793608" has status "Ready":"True"
	I0414 14:16:24.623859 2246921 node_ready.go:38] duration metric: took 9.186236ms for node "enable-default-cni-793608" to be "Ready" ...
	I0414 14:16:24.623871 2246921 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 14:16:24.633247 2246921 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-8vsj5" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:24.697336 2246921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 14:16:24.705511 2246921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:16:25.562908 2246921 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.038193704s)
	I0414 14:16:25.562944 2246921 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
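	(The one-liner completed at 14:16:25.562908 rewrites the coredns ConfigMap in kube-system so in-cluster DNS resolves host.minikube.internal to the host. A sketch of the Corefile fragment that sed expression produces — reconstructed from the command above, not read back from the cluster, with the remaining default plugins trimmed:
	
	  .:53 {
	      log
	      errors
	      health
	      hosts {
	         192.168.61.1 host.minikube.internal
	         fallthrough
	      }
	      forward . /etc/resolv.conf
	  }
	)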
	I0414 14:16:25.563020 2246921 main.go:141] libmachine: Making call to close driver server
	I0414 14:16:25.563047 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .Close
	I0414 14:16:25.563371 2246921 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:16:25.563384 2246921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:16:25.563393 2246921 main.go:141] libmachine: Making call to close driver server
	I0414 14:16:25.563400 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .Close
	I0414 14:16:25.563838 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Closing plugin on server side
	I0414 14:16:25.563905 2246921 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:16:25.563929 2246921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:16:25.594180 2246921 main.go:141] libmachine: Making call to close driver server
	I0414 14:16:25.594204 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .Close
	I0414 14:16:25.594584 2246921 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:16:25.594592 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Closing plugin on server side
	I0414 14:16:25.594607 2246921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:16:26.077943 2246921 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-793608" context rescaled to 1 replicas
	I0414 14:16:26.100977 2246921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.395415307s)
	I0414 14:16:26.101044 2246921 main.go:141] libmachine: Making call to close driver server
	I0414 14:16:26.101056 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .Close
	I0414 14:16:26.101405 2246921 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:16:26.101421 2246921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:16:26.101430 2246921 main.go:141] libmachine: Making call to close driver server
	I0414 14:16:26.101438 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .Close
	I0414 14:16:26.101450 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Closing plugin on server side
	I0414 14:16:26.101672 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Closing plugin on server side
	I0414 14:16:26.101713 2246921 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:16:26.101726 2246921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:16:26.103632 2246921 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0414 14:16:25.005086 2245195 pod_ready.go:93] pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace has status "Ready":"True"
	I0414 14:16:25.005120 2245195 pod_ready.go:82] duration metric: took 14.010099956s for pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.005134 2245195 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.013539 2245195 pod_ready.go:93] pod "etcd-flannel-793608" in "kube-system" namespace has status "Ready":"True"
	I0414 14:16:25.013565 2245195 pod_ready.go:82] duration metric: took 8.422542ms for pod "etcd-flannel-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.013579 2245195 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.021742 2245195 pod_ready.go:93] pod "kube-apiserver-flannel-793608" in "kube-system" namespace has status "Ready":"True"
	I0414 14:16:25.021769 2245195 pod_ready.go:82] duration metric: took 8.182307ms for pod "kube-apiserver-flannel-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.021783 2245195 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.028866 2245195 pod_ready.go:93] pod "kube-controller-manager-flannel-793608" in "kube-system" namespace has status "Ready":"True"
	I0414 14:16:25.028895 2245195 pod_ready.go:82] duration metric: took 7.104091ms for pod "kube-controller-manager-flannel-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.028917 2245195 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-l2wdq" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.038885 2245195 pod_ready.go:93] pod "kube-proxy-l2wdq" in "kube-system" namespace has status "Ready":"True"
	I0414 14:16:25.038913 2245195 pod_ready.go:82] duration metric: took 9.98732ms for pod "kube-proxy-l2wdq" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.038926 2245195 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.399809 2245195 pod_ready.go:93] pod "kube-scheduler-flannel-793608" in "kube-system" namespace has status "Ready":"True"
	I0414 14:16:25.399834 2245195 pod_ready.go:82] duration metric: took 360.900191ms for pod "kube-scheduler-flannel-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.399846 2245195 pod_ready.go:39] duration metric: took 14.420128309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 14:16:25.399864 2245195 api_server.go:52] waiting for apiserver process to appear ...
	I0414 14:16:25.399918 2245195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:16:25.417850 2245195 api_server.go:72] duration metric: took 26.777514906s to wait for apiserver process to appear ...
	I0414 14:16:25.417883 2245195 api_server.go:88] waiting for apiserver healthz status ...
	I0414 14:16:25.417903 2245195 api_server.go:253] Checking apiserver healthz at https://192.168.72.179:8443/healthz ...
	I0414 14:16:25.424022 2245195 api_server.go:279] https://192.168.72.179:8443/healthz returned 200:
	ok
	I0414 14:16:25.425022 2245195 api_server.go:141] control plane version: v1.32.2
	I0414 14:16:25.425045 2245195 api_server.go:131] duration metric: took 7.153666ms to wait for apiserver health ...
	I0414 14:16:25.425055 2245195 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 14:16:25.602605 2245195 system_pods.go:59] 7 kube-system pods found
	I0414 14:16:25.602662 2245195 system_pods.go:61] "coredns-668d6bf9bc-hts2b" [f50a2820-c4ea-48f9-af3f-66436de96f27] Running
	I0414 14:16:25.602673 2245195 system_pods.go:61] "etcd-flannel-793608" [0b5f8d64-b6c5-4c4c-9b48-b85697bb07b2] Running
	I0414 14:16:25.602680 2245195 system_pods.go:61] "kube-apiserver-flannel-793608" [5576dc53-7585-4a6b-bb8e-c42042292362] Running
	I0414 14:16:25.602688 2245195 system_pods.go:61] "kube-controller-manager-flannel-793608" [9d76aa30-9b55-48da-a5dd-cedc72aa8ce1] Running
	I0414 14:16:25.602703 2245195 system_pods.go:61] "kube-proxy-l2wdq" [da2a410f-f489-4449-b993-b45c7b21f670] Running
	I0414 14:16:25.602710 2245195 system_pods.go:61] "kube-scheduler-flannel-793608" [2f7bc4a7-326d-4068-b695-5c875b074669] Running
	I0414 14:16:25.602726 2245195 system_pods.go:61] "storage-provisioner" [ae27af3a-026b-498c-b411-2b7089e276bf] Running
	I0414 14:16:25.602736 2245195 system_pods.go:74] duration metric: took 177.67285ms to wait for pod list to return data ...
	I0414 14:16:25.602753 2245195 default_sa.go:34] waiting for default service account to be created ...
	I0414 14:16:25.800258 2245195 default_sa.go:45] found service account: "default"
	I0414 14:16:25.800293 2245195 default_sa.go:55] duration metric: took 197.529406ms for default service account to be created ...
	I0414 14:16:25.800304 2245195 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 14:16:26.000548 2245195 system_pods.go:86] 7 kube-system pods found
	I0414 14:16:26.000595 2245195 system_pods.go:89] "coredns-668d6bf9bc-hts2b" [f50a2820-c4ea-48f9-af3f-66436de96f27] Running
	I0414 14:16:26.000605 2245195 system_pods.go:89] "etcd-flannel-793608" [0b5f8d64-b6c5-4c4c-9b48-b85697bb07b2] Running
	I0414 14:16:26.000612 2245195 system_pods.go:89] "kube-apiserver-flannel-793608" [5576dc53-7585-4a6b-bb8e-c42042292362] Running
	I0414 14:16:26.000619 2245195 system_pods.go:89] "kube-controller-manager-flannel-793608" [9d76aa30-9b55-48da-a5dd-cedc72aa8ce1] Running
	I0414 14:16:26.000625 2245195 system_pods.go:89] "kube-proxy-l2wdq" [da2a410f-f489-4449-b993-b45c7b21f670] Running
	I0414 14:16:26.000631 2245195 system_pods.go:89] "kube-scheduler-flannel-793608" [2f7bc4a7-326d-4068-b695-5c875b074669] Running
	I0414 14:16:26.000637 2245195 system_pods.go:89] "storage-provisioner" [ae27af3a-026b-498c-b411-2b7089e276bf] Running
	I0414 14:16:26.000650 2245195 system_pods.go:126] duration metric: took 200.337178ms to wait for k8s-apps to be running ...
	I0414 14:16:26.000661 2245195 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 14:16:26.000754 2245195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:16:26.019676 2245195 system_svc.go:56] duration metric: took 19.001248ms WaitForService to wait for kubelet
	I0414 14:16:26.019718 2245195 kubeadm.go:582] duration metric: took 27.379387997s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:16:26.019745 2245195 node_conditions.go:102] verifying NodePressure condition ...
	I0414 14:16:26.200273 2245195 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 14:16:26.200312 2245195 node_conditions.go:123] node cpu capacity is 2
	I0414 14:16:26.200331 2245195 node_conditions.go:105] duration metric: took 180.579715ms to run NodePressure ...
	I0414 14:16:26.200347 2245195 start.go:241] waiting for startup goroutines ...
	I0414 14:16:26.200357 2245195 start.go:246] waiting for cluster config update ...
	I0414 14:16:26.200371 2245195 start.go:255] writing updated cluster config ...
	I0414 14:16:26.200750 2245195 ssh_runner.go:195] Run: rm -f paused
	I0414 14:16:26.255354 2245195 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 14:16:26.257685 2245195 out.go:177] * Done! kubectl is now configured to use "flannel-793608" cluster and "default" namespace by default
	I0414 14:16:26.104814 2246921 addons.go:514] duration metric: took 1.975843927s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0414 14:16:26.639710 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-8vsj5" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:28.640070 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-8vsj5" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:31.138255 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-8vsj5" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:33.139626 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-8vsj5" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:35.639235 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-8vsj5" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:36.638566 2246921 pod_ready.go:98] pod "coredns-668d6bf9bc-8vsj5" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:36 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:24 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:24 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:24 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:24 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.51 HostIPs:[{IP:192.168.61.51}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-04-14 14:16:24 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-14 14:16:26 +0000 UTC,FinishedAt:2025-04-14 14:16:36 +0000 UTC,ContainerID:cri-o://bdc92b8cf72dd46966f75e5f06abf6cdb4bfd8aa34caa570309836c58cf89152,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://bdc92b8cf72dd46966f75e5f06abf6cdb4bfd8aa34caa570309836c58cf89152 Started:0xc00167d900 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001ddfa60} {Name:kube-api-access-vl56x MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001ddfa90}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0414 14:16:36.638594 2246921 pod_ready.go:82] duration metric: took 12.005319047s for pod "coredns-668d6bf9bc-8vsj5" in "kube-system" namespace to be "Ready" ...
	E0414 14:16:36.638605 2246921 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-8vsj5" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:36 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:24 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:24 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:24 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:24 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.51 HostIPs:[{IP:192.168.61.51}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-04-14 14:16:24 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-14 14:16:26 +0000 UTC,FinishedAt:2025-04-14 14:16:36 +0000 UTC,ContainerID:cri-o://bdc92b8cf72dd46966f75e5f06abf6cdb4bfd8aa34caa570309836c58cf89152,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://bdc92b8cf72dd46966f75e5f06abf6cdb4bfd8aa34caa570309836c58cf89152 Started:0xc00167d900 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001ddfa60} {Name:kube-api-access-vl56x MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001ddfa90}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0414 14:16:36.638622 2246921 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:38.644543 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:41.144440 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:43.644216 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:46.143915 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:48.144952 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:50.645522 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:53.144897 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:55.145252 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:57.644328 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:17:00.145274 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:17:02.151799 2246921 pod_ready.go:93] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"True"
	I0414 14:17:02.151822 2246921 pod_ready.go:82] duration metric: took 25.513193873s for pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.151833 2246921 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.156058 2246921 pod_ready.go:93] pod "etcd-enable-default-cni-793608" in "kube-system" namespace has status "Ready":"True"
	I0414 14:17:02.156076 2246921 pod_ready.go:82] duration metric: took 4.237594ms for pod "etcd-enable-default-cni-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.156085 2246921 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.159224 2246921 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-793608" in "kube-system" namespace has status "Ready":"True"
	I0414 14:17:02.159240 2246921 pod_ready.go:82] duration metric: took 3.150225ms for pod "kube-apiserver-enable-default-cni-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.159250 2246921 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.162485 2246921 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-793608" in "kube-system" namespace has status "Ready":"True"
	I0414 14:17:02.162505 2246921 pod_ready.go:82] duration metric: took 3.248888ms for pod "kube-controller-manager-enable-default-cni-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.162513 2246921 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-ztqkc" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.165500 2246921 pod_ready.go:93] pod "kube-proxy-ztqkc" in "kube-system" namespace has status "Ready":"True"
	I0414 14:17:02.165515 2246921 pod_ready.go:82] duration metric: took 2.997241ms for pod "kube-proxy-ztqkc" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.165524 2246921 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.542630 2246921 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-793608" in "kube-system" namespace has status "Ready":"True"
	I0414 14:17:02.542653 2246921 pod_ready.go:82] duration metric: took 377.123651ms for pod "kube-scheduler-enable-default-cni-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.542661 2246921 pod_ready.go:39] duration metric: took 37.918773646s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 14:17:02.542677 2246921 api_server.go:52] waiting for apiserver process to appear ...
	I0414 14:17:02.542724 2246921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:17:02.558059 2246921 api_server.go:72] duration metric: took 38.429144648s to wait for apiserver process to appear ...
	I0414 14:17:02.558091 2246921 api_server.go:88] waiting for apiserver healthz status ...
	I0414 14:17:02.558115 2246921 api_server.go:253] Checking apiserver healthz at https://192.168.61.51:8443/healthz ...
	I0414 14:17:02.562804 2246921 api_server.go:279] https://192.168.61.51:8443/healthz returned 200:
	ok
	I0414 14:17:02.563889 2246921 api_server.go:141] control plane version: v1.32.2
	I0414 14:17:02.563911 2246921 api_server.go:131] duration metric: took 5.813659ms to wait for apiserver health ...
	I0414 14:17:02.563919 2246921 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 14:17:02.745213 2246921 system_pods.go:59] 7 kube-system pods found
	I0414 14:17:02.745247 2246921 system_pods.go:61] "coredns-668d6bf9bc-jbt4j" [b142d5d4-3ab2-450b-8396-cafb1d00b2a3] Running
	I0414 14:17:02.745252 2246921 system_pods.go:61] "etcd-enable-default-cni-793608" [6aced462-ff90-40c9-b55c-3217fb8d2cfb] Running
	I0414 14:17:02.745257 2246921 system_pods.go:61] "kube-apiserver-enable-default-cni-793608" [57eb96df-a3c9-4e5e-b3f4-03cbaf559917] Running
	I0414 14:17:02.745261 2246921 system_pods.go:61] "kube-controller-manager-enable-default-cni-793608" [e5d364e8-3779-4cf7-ac59-54cbe5bc055d] Running
	I0414 14:17:02.745265 2246921 system_pods.go:61] "kube-proxy-ztqkc" [4a64fc36-d13b-4c5c-9bf1-17dd88ef4d34] Running
	I0414 14:17:02.745268 2246921 system_pods.go:61] "kube-scheduler-enable-default-cni-793608" [95e190b4-7390-4388-85eb-85157648e866] Running
	I0414 14:17:02.745271 2246921 system_pods.go:61] "storage-provisioner" [8666788b-504e-4f32-8dd5-c4da6070f943] Running
	I0414 14:17:02.745278 2246921 system_pods.go:74] duration metric: took 181.352893ms to wait for pod list to return data ...
	I0414 14:17:02.745285 2246921 default_sa.go:34] waiting for default service account to be created ...
	I0414 14:17:02.942681 2246921 default_sa.go:45] found service account: "default"
	I0414 14:17:02.942711 2246921 default_sa.go:55] duration metric: took 197.418865ms for default service account to be created ...
	I0414 14:17:02.942721 2246921 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 14:17:03.144283 2246921 system_pods.go:86] 7 kube-system pods found
	I0414 14:17:03.144315 2246921 system_pods.go:89] "coredns-668d6bf9bc-jbt4j" [b142d5d4-3ab2-450b-8396-cafb1d00b2a3] Running
	I0414 14:17:03.144320 2246921 system_pods.go:89] "etcd-enable-default-cni-793608" [6aced462-ff90-40c9-b55c-3217fb8d2cfb] Running
	I0414 14:17:03.144324 2246921 system_pods.go:89] "kube-apiserver-enable-default-cni-793608" [57eb96df-a3c9-4e5e-b3f4-03cbaf559917] Running
	I0414 14:17:03.144329 2246921 system_pods.go:89] "kube-controller-manager-enable-default-cni-793608" [e5d364e8-3779-4cf7-ac59-54cbe5bc055d] Running
	I0414 14:17:03.144332 2246921 system_pods.go:89] "kube-proxy-ztqkc" [4a64fc36-d13b-4c5c-9bf1-17dd88ef4d34] Running
	I0414 14:17:03.144336 2246921 system_pods.go:89] "kube-scheduler-enable-default-cni-793608" [95e190b4-7390-4388-85eb-85157648e866] Running
	I0414 14:17:03.144339 2246921 system_pods.go:89] "storage-provisioner" [8666788b-504e-4f32-8dd5-c4da6070f943] Running
	I0414 14:17:03.144348 2246921 system_pods.go:126] duration metric: took 201.619782ms to wait for k8s-apps to be running ...
	I0414 14:17:03.144358 2246921 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 14:17:03.144414 2246921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:17:03.162718 2246921 system_svc.go:56] duration metric: took 18.34957ms WaitForService to wait for kubelet
	I0414 14:17:03.162746 2246921 kubeadm.go:582] duration metric: took 39.033839006s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:17:03.162764 2246921 node_conditions.go:102] verifying NodePressure condition ...
	I0414 14:17:03.346739 2246921 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 14:17:03.346770 2246921 node_conditions.go:123] node cpu capacity is 2
	I0414 14:17:03.346784 2246921 node_conditions.go:105] duration metric: took 184.014842ms to run NodePressure ...
	I0414 14:17:03.346796 2246921 start.go:241] waiting for startup goroutines ...
	I0414 14:17:03.346803 2246921 start.go:246] waiting for cluster config update ...
	I0414 14:17:03.346813 2246921 start.go:255] writing updated cluster config ...
	I0414 14:17:03.347081 2246921 ssh_runner.go:195] Run: rm -f paused
	I0414 14:17:03.396319 2246921 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 14:17:03.399139 2246921 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-793608" cluster and "default" namespace by default
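	(With the start complete, kubectl is pointed at this profile's context. A quick usage sketch, assuming the context name matches the profile name, which is minikube's default behaviour:
	
	  kubectl config current-context                                   # expected: enable-default-cni-793608
	  kubectl --context enable-default-cni-793608 get pods -n kube-system
	)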
	
	
	==> CRI-O <==
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.556506374Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640727556477476,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41de75c5-acff-487f-81f9-9102f83f44a6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.557330668Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c6899e6-99bf-42e6-b039-f42bdb074e72 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.557455459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c6899e6-99bf-42e6-b039-f42bdb074e72 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.557514182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3c6899e6-99bf-42e6-b039-f42bdb074e72 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.593608683Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=620068b5-9a69-4ae1-8270-ba424625c358 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.593734487Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=620068b5-9a69-4ae1-8270-ba424625c358 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.595157611Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9afdf04f-5e20-4060-94da-479275b502ab name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.595661057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640727595635105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9afdf04f-5e20-4060-94da-479275b502ab name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.596349798Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2869fd80-502f-4e1e-a8f6-d845890aad5b name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.596442651Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2869fd80-502f-4e1e-a8f6-d845890aad5b name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.596515343Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2869fd80-502f-4e1e-a8f6-d845890aad5b name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.633190604Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2afb6a3f-62fb-49fe-8761-38fa7f2e72cf name=/runtime.v1.RuntimeService/Version
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.633313116Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2afb6a3f-62fb-49fe-8761-38fa7f2e72cf name=/runtime.v1.RuntimeService/Version
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.634582789Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d199b6e-9f3c-4559-9fc2-3d21991d59b7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.635135369Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640727635103909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d199b6e-9f3c-4559-9fc2-3d21991d59b7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.635839084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e47e9dc-6464-4f24-9d96-991906e8959f name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.635919470Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e47e9dc-6464-4f24-9d96-991906e8959f name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.635979749Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6e47e9dc-6464-4f24-9d96-991906e8959f name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.672610185Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6961893c-8d42-4b3d-9a2f-cbdb9e57b477 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.672718488Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6961893c-8d42-4b3d-9a2f-cbdb9e57b477 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.674382134Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7611f657-1d9a-4bab-89a9-416c1e2cfe42 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.674856812Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640727674824977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7611f657-1d9a-4bab-89a9-416c1e2cfe42 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.675436049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0ba9980-5c5d-4328-a1ee-ab415a6dcde4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.675523160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0ba9980-5c5d-4328-a1ee-ab415a6dcde4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:25:27 old-k8s-version-954411 crio[632]: time="2025-04-14 14:25:27.675588180Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b0ba9980-5c5d-4328-a1ee-ab415a6dcde4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr14 14:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055482] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043064] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr14 14:08] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.836790] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.609210] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.084082] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.058063] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072240] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.169515] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.152610] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.265682] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +8.281644] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +0.060503] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.889080] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[ +11.358788] kauditd_printk_skb: 46 callbacks suppressed
	[Apr14 14:12] systemd-fstab-generator[5015]: Ignoring "noauto" option for root device
	[Apr14 14:14] systemd-fstab-generator[5297]: Ignoring "noauto" option for root device
	[  +0.108430] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:25:27 up 17 min,  0 users,  load average: 0.00, 0.00, 0.00
	Linux old-k8s-version-954411 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 14 14:25:25 old-k8s-version-954411 kubelet[6472]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc0009d3170)
	Apr 14 14:25:25 old-k8s-version-954411 kubelet[6472]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Apr 14 14:25:25 old-k8s-version-954411 kubelet[6472]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Apr 14 14:25:25 old-k8s-version-954411 kubelet[6472]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Apr 14 14:25:25 old-k8s-version-954411 kubelet[6472]: goroutine 156 [select]:
	Apr 14 14:25:25 old-k8s-version-954411 kubelet[6472]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000bb9ef0, 0x4f0ac20, 0xc000b9a960, 0x1, 0xc00009e0c0)
	Apr 14 14:25:25 old-k8s-version-954411 kubelet[6472]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Apr 14 14:25:25 old-k8s-version-954411 kubelet[6472]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000960c40, 0xc00009e0c0)
	Apr 14 14:25:25 old-k8s-version-954411 kubelet[6472]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 14 14:25:25 old-k8s-version-954411 kubelet[6472]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 14 14:25:25 old-k8s-version-954411 kubelet[6472]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 14 14:25:25 old-k8s-version-954411 kubelet[6472]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0009abc50, 0xc0009cf780)
	Apr 14 14:25:25 old-k8s-version-954411 kubelet[6472]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 14 14:25:25 old-k8s-version-954411 kubelet[6472]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 14 14:25:25 old-k8s-version-954411 kubelet[6472]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 14 14:25:25 old-k8s-version-954411 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 14 14:25:25 old-k8s-version-954411 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 14 14:25:26 old-k8s-version-954411 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Apr 14 14:25:26 old-k8s-version-954411 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 14 14:25:26 old-k8s-version-954411 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 14 14:25:26 old-k8s-version-954411 kubelet[6481]: I0414 14:25:26.294337    6481 server.go:416] Version: v1.20.0
	Apr 14 14:25:26 old-k8s-version-954411 kubelet[6481]: I0414 14:25:26.294608    6481 server.go:837] Client rotation is on, will bootstrap in background
	Apr 14 14:25:26 old-k8s-version-954411 kubelet[6481]: I0414 14:25:26.296525    6481 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 14 14:25:26 old-k8s-version-954411 kubelet[6481]: I0414 14:25:26.297456    6481 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Apr 14 14:25:26 old-k8s-version-954411 kubelet[6481]: W0414 14:25:26.297478    6481 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-954411 -n old-k8s-version-954411
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-954411 -n old-k8s-version-954411: exit status 2 (233.406293ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-954411" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.59s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (340.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:25:28.852987 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:25:32.700002 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:25:54.158523 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:26:26.285590 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:26:53.985950 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:27:03.833015 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:27:25.775207 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:27:31.537155 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:28:22.287991 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/auto-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:28:46.262919 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:28:50.969704 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/kindnet-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:29:34.279886 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/calico-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:29:52.448238 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/default-k8s-diff-port-460312/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:30:04.998487 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/custom-flannel-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:30:09.330100 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:30:26.456402 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/bridge-793608/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E0414 14:30:27.986362 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-954411 -n old-k8s-version-954411
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-954411 -n old-k8s-version-954411: exit status 2 (231.543076ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-954411" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-954411 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-954411 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.133µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-954411 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
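For context on what this assertion checks: the test polls the kubernetes-dashboard namespace for pods labelled k8s-app=kubernetes-dashboard and then inspects the dashboard-metrics-scraper deployment, expecting its image to contain registry.k8s.io/echoserver:1.4. A rough manual equivalent, assuming the profile's kubeconfig context old-k8s-version-954411 is loadable and the apiserver is reachable (here it was refusing connections on 192.168.39.90:8443), would be:
	kubectl --context old-k8s-version-954411 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context old-k8s-version-954411 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'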
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-954411 -n old-k8s-version-954411
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-954411 -n old-k8s-version-954411: exit status 2 (219.427269ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-954411 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo iptables -t nat -L -n -v                        |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608 sudo cat                | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608 sudo cat                | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608 sudo cat                | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-793608                         | enable-default-cni-793608 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:15:31
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
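	(For example, the header of the next entry, "I0414 14:15:31.712686 2246921 out.go:345]", reads under this format as severity I (info), date 0414 (April 14), time 14:15:31.712686, thread id 2246921, and source location out.go line 345.)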
	I0414 14:15:31.712686 2246921 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:15:31.712831 2246921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:15:31.712841 2246921 out.go:358] Setting ErrFile to fd 2...
	I0414 14:15:31.712845 2246921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:15:31.713023 2246921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 14:15:31.713616 2246921 out.go:352] Setting JSON to false
	I0414 14:15:31.714831 2246921 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":169071,"bootTime":1744471061,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:15:31.714947 2246921 start.go:139] virtualization: kvm guest
	I0414 14:15:31.717011 2246921 out.go:177] * [enable-default-cni-793608] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:15:31.718463 2246921 out.go:177]   - MINIKUBE_LOCATION=20623
	I0414 14:15:31.718471 2246921 notify.go:220] Checking for updates...
	I0414 14:15:31.720654 2246921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:15:31.721764 2246921 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:15:31.722980 2246921 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:15:31.724178 2246921 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:15:31.725315 2246921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:15:31.727113 2246921 config.go:182] Loaded profile config "bridge-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:15:31.727265 2246921 config.go:182] Loaded profile config "flannel-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:15:31.727430 2246921 config.go:182] Loaded profile config "old-k8s-version-954411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 14:15:31.727563 2246921 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:15:31.767165 2246921 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:15:31.768293 2246921 start.go:297] selected driver: kvm2
	I0414 14:15:31.768305 2246921 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:15:31.768317 2246921 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:15:31.769036 2246921 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:15:31.769109 2246921 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20623-2183077/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:15:31.784672 2246921 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:15:31.784720 2246921 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0414 14:15:31.784990 2246921 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0414 14:15:31.785021 2246921 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:15:31.785052 2246921 cni.go:84] Creating CNI manager for "bridge"
	I0414 14:15:31.785058 2246921 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 14:15:31.785117 2246921 start.go:340] cluster config:
	{Name:enable-default-cni-793608 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:enable-default-cni-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:15:31.785199 2246921 iso.go:125] acquiring lock: {Name:mk1b6bc811d798b73231639961523f4c8d001a9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:15:31.786961 2246921 out.go:177] * Starting "enable-default-cni-793608" primary control-plane node in "enable-default-cni-793608" cluster
	I0414 14:15:29.994679 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:29.995209 2245195 main.go:141] libmachine: (flannel-793608) DBG | unable to find current IP address of domain flannel-793608 in network mk-flannel-793608
	I0414 14:15:29.995234 2245195 main.go:141] libmachine: (flannel-793608) DBG | I0414 14:15:29.995171 2245218 retry.go:31] will retry after 4.26066759s: waiting for domain to come up
	I0414 14:15:34.260693 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.261277 2245195 main.go:141] libmachine: (flannel-793608) found domain IP: 192.168.72.179
	I0414 14:15:34.261303 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has current primary IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.261308 2245195 main.go:141] libmachine: (flannel-793608) reserving static IP address...
	I0414 14:15:34.261716 2245195 main.go:141] libmachine: (flannel-793608) DBG | unable to find host DHCP lease matching {name: "flannel-793608", mac: "52:54:00:62:9d:72", ip: "192.168.72.179"} in network mk-flannel-793608
	I0414 14:15:34.346350 2245195 main.go:141] libmachine: (flannel-793608) reserved static IP address 192.168.72.179 for domain flannel-793608
	I0414 14:15:34.346390 2245195 main.go:141] libmachine: (flannel-793608) waiting for SSH...
	I0414 14:15:34.346401 2245195 main.go:141] libmachine: (flannel-793608) DBG | Getting to WaitForSSH function...
	I0414 14:15:34.349135 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.349868 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.349899 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.350052 2245195 main.go:141] libmachine: (flannel-793608) DBG | Using SSH client type: external
	I0414 14:15:34.350078 2245195 main.go:141] libmachine: (flannel-793608) DBG | Using SSH private key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa (-rw-------)
	I0414 14:15:34.350116 2245195 main.go:141] libmachine: (flannel-793608) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:15:34.350131 2245195 main.go:141] libmachine: (flannel-793608) DBG | About to run SSH command:
	I0414 14:15:34.350150 2245195 main.go:141] libmachine: (flannel-793608) DBG | exit 0
	I0414 14:15:34.484913 2245195 main.go:141] libmachine: (flannel-793608) DBG | SSH cmd err, output: <nil>: 
	I0414 14:15:34.485227 2245195 main.go:141] libmachine: (flannel-793608) KVM machine creation complete
	I0414 14:15:34.485544 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetConfigRaw
	I0414 14:15:34.486221 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:34.486401 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:34.486553 2245195 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:15:34.486568 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetState
	I0414 14:15:34.487978 2245195 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:15:34.487993 2245195 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:15:34.488000 2245195 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:15:34.488008 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:34.490564 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.490891 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.490935 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.491086 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:34.491262 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.491420 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.491570 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:34.491735 2245195 main.go:141] libmachine: Using SSH client type: native
	I0414 14:15:34.491982 2245195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0414 14:15:34.491998 2245195 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:15:34.604245 2245195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:15:34.604277 2245195 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:15:34.604289 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:34.606969 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.607364 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.607394 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.607479 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:34.607712 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.607871 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.608010 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:34.608176 2245195 main.go:141] libmachine: Using SSH client type: native
	I0414 14:15:34.608423 2245195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0414 14:15:34.608435 2245195 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:15:31.788237 2246921 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:15:31.788265 2246921 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 14:15:31.788272 2246921 cache.go:56] Caching tarball of preloaded images
	I0414 14:15:31.788346 2246921 preload.go:172] Found /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 14:15:31.788355 2246921 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 14:15:31.788446 2246921 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/config.json ...
	I0414 14:15:31.788463 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/config.json: {Name:mkf77fb616cb68a05b6b927a1d1b666f496a2e2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:31.788580 2246921 start.go:360] acquireMachinesLock for enable-default-cni-793608: {Name:mka8bf7d0904b7ab9a32ecac2c5513c5d5418afd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:15:35.885793 2246921 start.go:364] duration metric: took 4.097174218s to acquireMachinesLock for "enable-default-cni-793608"
	I0414 14:15:35.885866 2246921 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-793608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:enable-default-cni-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:15:35.886064 2246921 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:15:35.888060 2246921 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0414 14:15:35.888295 2246921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:15:35.888367 2246921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:15:35.906793 2246921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
	I0414 14:15:35.907218 2246921 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:15:35.907761 2246921 main.go:141] libmachine: Using API Version  1
	I0414 14:15:35.907787 2246921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:15:35.908162 2246921 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:15:35.908377 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetMachineName
	I0414 14:15:35.908506 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:15:35.908667 2246921 start.go:159] libmachine.API.Create for "enable-default-cni-793608" (driver="kvm2")
	I0414 14:15:35.908702 2246921 client.go:168] LocalClient.Create starting
	I0414 14:15:35.908763 2246921 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem
	I0414 14:15:35.908804 2246921 main.go:141] libmachine: Decoding PEM data...
	I0414 14:15:35.908828 2246921 main.go:141] libmachine: Parsing certificate...
	I0414 14:15:35.908911 2246921 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem
	I0414 14:15:35.908946 2246921 main.go:141] libmachine: Decoding PEM data...
	I0414 14:15:35.908967 2246921 main.go:141] libmachine: Parsing certificate...
	I0414 14:15:35.909001 2246921 main.go:141] libmachine: Running pre-create checks...
	I0414 14:15:35.909014 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .PreCreateCheck
	I0414 14:15:35.909444 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetConfigRaw
	I0414 14:15:35.909876 2246921 main.go:141] libmachine: Creating machine...
	I0414 14:15:35.909891 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .Create
	I0414 14:15:35.910047 2246921 main.go:141] libmachine: (enable-default-cni-793608) creating KVM machine...
	I0414 14:15:35.910070 2246921 main.go:141] libmachine: (enable-default-cni-793608) creating network...
	I0414 14:15:35.911361 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found existing default KVM network
	I0414 14:15:35.912285 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:35.912133 2246989 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:06:97:78} reservation:<nil>}
	I0414 14:15:35.913042 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:35.912966 2246989 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:af:99:3f} reservation:<nil>}
	I0414 14:15:35.914019 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:35.913920 2246989 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000292a90}
	I0414 14:15:35.914054 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | created network xml: 
	I0414 14:15:35.914078 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | <network>
	I0414 14:15:35.914088 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |   <name>mk-enable-default-cni-793608</name>
	I0414 14:15:35.914099 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |   <dns enable='no'/>
	I0414 14:15:35.914108 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |   
	I0414 14:15:35.914122 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0414 14:15:35.914132 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |     <dhcp>
	I0414 14:15:35.914142 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0414 14:15:35.914154 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |     </dhcp>
	I0414 14:15:35.914161 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |   </ip>
	I0414 14:15:35.914173 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG |   
	I0414 14:15:35.914184 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | </network>
	I0414 14:15:35.914202 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | 
	I0414 14:15:35.919363 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | trying to create private KVM network mk-enable-default-cni-793608 192.168.61.0/24...
	I0414 14:15:36.004404 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | private KVM network mk-enable-default-cni-793608 192.168.61.0/24 created
	I0414 14:15:36.004444 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting up store path in /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608 ...
	I0414 14:15:36.004472 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:36.004346 2246989 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:15:36.004493 2246921 main.go:141] libmachine: (enable-default-cni-793608) building disk image from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:15:36.004516 2246921 main.go:141] libmachine: (enable-default-cni-793608) Downloading /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:15:36.310781 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:36.310631 2246989 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa...
	I0414 14:15:36.425010 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:36.424863 2246989 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/enable-default-cni-793608.rawdisk...
	I0414 14:15:36.425050 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Writing magic tar header
	I0414 14:15:36.425064 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Writing SSH key tar header
	I0414 14:15:36.425072 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:36.425023 2246989 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608 ...
	I0414 14:15:36.425218 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608
	I0414 14:15:36.425260 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608 (perms=drwx------)
	I0414 14:15:36.425270 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines
	I0414 14:15:36.425291 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:15:36.425304 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20623-2183077
	I0414 14:15:36.425315 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:15:36.425326 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home/jenkins
	I0414 14:15:36.425339 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:15:36.425360 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077/.minikube (perms=drwxr-xr-x)
	I0414 14:15:36.425371 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting executable bit set on /home/jenkins/minikube-integration/20623-2183077 (perms=drwxrwxr-x)
	I0414 14:15:36.425379 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | checking permissions on dir: /home
	I0414 14:15:36.425391 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | skipping /home - not owner
	I0414 14:15:36.425401 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:15:36.425420 2246921 main.go:141] libmachine: (enable-default-cni-793608) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:15:36.425433 2246921 main.go:141] libmachine: (enable-default-cni-793608) creating domain...
	I0414 14:15:36.426763 2246921 main.go:141] libmachine: (enable-default-cni-793608) define libvirt domain using xml: 
	I0414 14:15:36.426788 2246921 main.go:141] libmachine: (enable-default-cni-793608) <domain type='kvm'>
	I0414 14:15:36.426799 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <name>enable-default-cni-793608</name>
	I0414 14:15:36.426807 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <memory unit='MiB'>3072</memory>
	I0414 14:15:36.426816 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <vcpu>2</vcpu>
	I0414 14:15:36.426832 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <features>
	I0414 14:15:36.426844 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <acpi/>
	I0414 14:15:36.426858 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <apic/>
	I0414 14:15:36.426869 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <pae/>
	I0414 14:15:36.426877 2246921 main.go:141] libmachine: (enable-default-cni-793608)     
	I0414 14:15:36.426882 2246921 main.go:141] libmachine: (enable-default-cni-793608)   </features>
	I0414 14:15:36.426902 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <cpu mode='host-passthrough'>
	I0414 14:15:36.426909 2246921 main.go:141] libmachine: (enable-default-cni-793608)   
	I0414 14:15:36.426914 2246921 main.go:141] libmachine: (enable-default-cni-793608)   </cpu>
	I0414 14:15:36.426921 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <os>
	I0414 14:15:36.426925 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <type>hvm</type>
	I0414 14:15:36.426963 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <boot dev='cdrom'/>
	I0414 14:15:36.427000 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <boot dev='hd'/>
	I0414 14:15:36.427014 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <bootmenu enable='no'/>
	I0414 14:15:36.427038 2246921 main.go:141] libmachine: (enable-default-cni-793608)   </os>
	I0414 14:15:36.427051 2246921 main.go:141] libmachine: (enable-default-cni-793608)   <devices>
	I0414 14:15:36.427067 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <disk type='file' device='cdrom'>
	I0414 14:15:36.427085 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/boot2docker.iso'/>
	I0414 14:15:36.427097 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <target dev='hdc' bus='scsi'/>
	I0414 14:15:36.427109 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <readonly/>
	I0414 14:15:36.427119 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </disk>
	I0414 14:15:36.427129 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <disk type='file' device='disk'>
	I0414 14:15:36.427147 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:15:36.427170 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <source file='/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/enable-default-cni-793608.rawdisk'/>
	I0414 14:15:36.427181 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <target dev='hda' bus='virtio'/>
	I0414 14:15:36.427194 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </disk>
	I0414 14:15:36.427205 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <interface type='network'>
	I0414 14:15:36.427218 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <source network='mk-enable-default-cni-793608'/>
	I0414 14:15:36.427231 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <model type='virtio'/>
	I0414 14:15:36.427247 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </interface>
	I0414 14:15:36.427259 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <interface type='network'>
	I0414 14:15:36.427267 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <source network='default'/>
	I0414 14:15:36.427279 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <model type='virtio'/>
	I0414 14:15:36.427299 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </interface>
	I0414 14:15:36.427311 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <serial type='pty'>
	I0414 14:15:36.427326 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <target port='0'/>
	I0414 14:15:36.427338 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </serial>
	I0414 14:15:36.427349 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <console type='pty'>
	I0414 14:15:36.427370 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <target type='serial' port='0'/>
	I0414 14:15:36.427397 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </console>
	I0414 14:15:36.427410 2246921 main.go:141] libmachine: (enable-default-cni-793608)     <rng model='virtio'>
	I0414 14:15:36.427421 2246921 main.go:141] libmachine: (enable-default-cni-793608)       <backend model='random'>/dev/random</backend>
	I0414 14:15:36.427432 2246921 main.go:141] libmachine: (enable-default-cni-793608)     </rng>
	I0414 14:15:36.427442 2246921 main.go:141] libmachine: (enable-default-cni-793608)     
	I0414 14:15:36.427451 2246921 main.go:141] libmachine: (enable-default-cni-793608)     
	I0414 14:15:36.427464 2246921 main.go:141] libmachine: (enable-default-cni-793608)   </devices>
	I0414 14:15:36.427475 2246921 main.go:141] libmachine: (enable-default-cni-793608) </domain>
	I0414 14:15:36.427488 2246921 main.go:141] libmachine: (enable-default-cni-793608) 
	I0414 14:15:36.431881 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:85:82:bc in network default
	I0414 14:15:36.432649 2246921 main.go:141] libmachine: (enable-default-cni-793608) starting domain...
	I0414 14:15:36.432690 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:36.432705 2246921 main.go:141] libmachine: (enable-default-cni-793608) ensuring networks are active...
	I0414 14:15:36.433501 2246921 main.go:141] libmachine: (enable-default-cni-793608) Ensuring network default is active
	I0414 14:15:36.433815 2246921 main.go:141] libmachine: (enable-default-cni-793608) Ensuring network mk-enable-default-cni-793608 is active
	I0414 14:15:36.434345 2246921 main.go:141] libmachine: (enable-default-cni-793608) getting domain XML...
	I0414 14:15:36.435023 2246921 main.go:141] libmachine: (enable-default-cni-793608) creating domain...
	I0414 14:15:34.721833 2245195 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:15:34.721950 2245195 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:15:34.721968 2245195 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:15:34.721980 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetMachineName
	I0414 14:15:34.722264 2245195 buildroot.go:166] provisioning hostname "flannel-793608"
	I0414 14:15:34.722299 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetMachineName
	I0414 14:15:34.722517 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:34.725190 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.725590 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.725618 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.725786 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:34.725976 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.726158 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.726304 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:34.726456 2245195 main.go:141] libmachine: Using SSH client type: native
	I0414 14:15:34.726666 2245195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0414 14:15:34.726685 2245195 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-793608 && echo "flannel-793608" | sudo tee /etc/hostname
	I0414 14:15:34.856671 2245195 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-793608
	
	I0414 14:15:34.856706 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:34.859492 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.859878 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.859918 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.860081 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:34.860306 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.860473 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:34.860626 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:34.860812 2245195 main.go:141] libmachine: Using SSH client type: native
	I0414 14:15:34.861092 2245195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0414 14:15:34.861118 2245195 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-793608' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-793608/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-793608' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:15:34.981989 2245195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:15:34.982020 2245195 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20623-2183077/.minikube CaCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20623-2183077/.minikube}
	I0414 14:15:34.982064 2245195 buildroot.go:174] setting up certificates
	I0414 14:15:34.982083 2245195 provision.go:84] configureAuth start
	I0414 14:15:34.982100 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetMachineName
	I0414 14:15:34.982387 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetIP
	I0414 14:15:34.985287 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.985634 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.985664 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.985812 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:34.987950 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.988286 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:34.988317 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:34.988471 2245195 provision.go:143] copyHostCerts
	I0414 14:15:34.988524 2245195 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem, removing ...
	I0414 14:15:34.988534 2245195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem
	I0414 14:15:34.988599 2245195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem (1078 bytes)
	I0414 14:15:34.988693 2245195 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem, removing ...
	I0414 14:15:34.988701 2245195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem
	I0414 14:15:34.988724 2245195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem (1123 bytes)
	I0414 14:15:34.988819 2245195 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem, removing ...
	I0414 14:15:34.988834 2245195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem
	I0414 14:15:34.988863 2245195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem (1675 bytes)
	I0414 14:15:34.988910 2245195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem org=jenkins.flannel-793608 san=[127.0.0.1 192.168.72.179 flannel-793608 localhost minikube]
	I0414 14:15:35.242680 2245195 provision.go:177] copyRemoteCerts
	I0414 14:15:35.242795 2245195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:15:35.242845 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:35.246504 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.246882 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.246915 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.247123 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:35.247346 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.247546 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:35.247691 2245195 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa Username:docker}
	I0414 14:15:35.335122 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 14:15:35.359746 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0414 14:15:35.383458 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:15:35.406827 2245195 provision.go:87] duration metric: took 424.726599ms to configureAuth
	I0414 14:15:35.406858 2245195 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:15:35.407035 2245195 config.go:182] Loaded profile config "flannel-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:15:35.407113 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:35.409975 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.410322 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.410352 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.410487 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:35.410685 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.410854 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.410996 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:35.411145 2245195 main.go:141] libmachine: Using SSH client type: native
	I0414 14:15:35.411363 2245195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0414 14:15:35.411378 2245195 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 14:15:35.634723 2245195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 14:15:35.634754 2245195 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:15:35.634762 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetURL
	I0414 14:15:35.636108 2245195 main.go:141] libmachine: (flannel-793608) DBG | using libvirt version 6000000
	I0414 14:15:35.638402 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.638738 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.638770 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.638978 2245195 main.go:141] libmachine: Docker is up and running!
	I0414 14:15:35.638990 2245195 main.go:141] libmachine: Reticulating splines...
	I0414 14:15:35.638999 2245195 client.go:171] duration metric: took 25.896323518s to LocalClient.Create
	I0414 14:15:35.639031 2245195 start.go:167] duration metric: took 25.896405712s to libmachine.API.Create "flannel-793608"
	I0414 14:15:35.639044 2245195 start.go:293] postStartSetup for "flannel-793608" (driver="kvm2")
	I0414 14:15:35.639058 2245195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:15:35.639082 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:35.639326 2245195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:15:35.639354 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:35.641386 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.641767 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.641796 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.641940 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:35.642082 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.642270 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:35.642382 2245195 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa Username:docker}
	I0414 14:15:35.727765 2245195 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:15:35.732019 2245195 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:15:35.732052 2245195 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/addons for local assets ...
	I0414 14:15:35.732122 2245195 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/files for local assets ...
	I0414 14:15:35.732246 2245195 filesync.go:149] local asset: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem -> 21904002.pem in /etc/ssl/certs
	I0414 14:15:35.732379 2245195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:15:35.742061 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:15:35.766562 2245195 start.go:296] duration metric: took 127.496422ms for postStartSetup
	I0414 14:15:35.766624 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetConfigRaw
	I0414 14:15:35.767287 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetIP
	I0414 14:15:35.770180 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.770527 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.770556 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.770795 2245195 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/config.json ...
	I0414 14:15:35.771009 2245195 start.go:128] duration metric: took 26.050328808s to createHost
	I0414 14:15:35.771033 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:35.773350 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.773680 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.773709 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.773847 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:35.774059 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.774197 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.774332 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:35.774490 2245195 main.go:141] libmachine: Using SSH client type: native
	I0414 14:15:35.774772 2245195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0414 14:15:35.774784 2245195 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:15:35.885598 2245195 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640135.860958804
	
	I0414 14:15:35.885628 2245195 fix.go:216] guest clock: 1744640135.860958804
	I0414 14:15:35.885639 2245195 fix.go:229] Guest: 2025-04-14 14:15:35.860958804 +0000 UTC Remote: 2025-04-14 14:15:35.771023131 +0000 UTC m=+26.173579221 (delta=89.935673ms)
	I0414 14:15:35.885673 2245195 fix.go:200] guest clock delta is within tolerance: 89.935673ms
	I0414 14:15:35.885683 2245195 start.go:83] releasing machines lock for "flannel-793608", held for 26.165125753s
	I0414 14:15:35.885713 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:35.886039 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetIP
	I0414 14:15:35.889061 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.889425 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.889466 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.889637 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:35.890211 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:35.890425 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:35.890536 2245195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:15:35.890579 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:35.890691 2245195 ssh_runner.go:195] Run: cat /version.json
	I0414 14:15:35.890721 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:35.893586 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.893869 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.893934 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.893981 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.894247 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:35.894384 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:35.894411 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:35.894457 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.894574 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:35.894629 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:35.894739 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:35.894808 2245195 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa Username:docker}
	I0414 14:15:35.894924 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:35.895057 2245195 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa Username:docker}
	I0414 14:15:35.982011 2245195 ssh_runner.go:195] Run: systemctl --version
	I0414 14:15:36.008338 2245195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 14:15:36.168391 2245195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:15:36.174476 2245195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:15:36.174551 2245195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:15:36.191051 2245195 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:15:36.191080 2245195 start.go:495] detecting cgroup driver to use...
	I0414 14:15:36.191168 2245195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 14:15:36.209096 2245195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 14:15:36.223881 2245195 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:15:36.223954 2245195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:15:36.239607 2245195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:15:36.254647 2245195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:15:36.382628 2245195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:15:36.567479 2245195 docker.go:233] disabling docker service ...
	I0414 14:15:36.567573 2245195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:15:36.583824 2245195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:15:36.597712 2245195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:15:36.773681 2245195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:15:36.916917 2245195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:15:36.935946 2245195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:15:36.958970 2245195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 14:15:36.959024 2245195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:36.972811 2245195 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 14:15:36.972871 2245195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:36.988108 2245195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:37.003343 2245195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:37.018161 2245195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:15:37.030406 2245195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:37.043236 2245195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:37.064170 2245195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:15:37.080502 2245195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:15:37.094496 2245195 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:15:37.094554 2245195 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:15:37.109299 2245195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:15:37.120177 2245195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:15:37.270593 2245195 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 14:15:37.363308 2245195 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 14:15:37.363395 2245195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 14:15:37.368889 2245195 start.go:563] Will wait 60s for crictl version
	I0414 14:15:37.368989 2245195 ssh_runner.go:195] Run: which crictl
	I0414 14:15:37.373260 2245195 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:15:37.419353 2245195 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 14:15:37.419459 2245195 ssh_runner.go:195] Run: crio --version
	I0414 14:15:37.452713 2245195 ssh_runner.go:195] Run: crio --version
	I0414 14:15:37.488597 2245195 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 14:15:37.489796 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetIP
	I0414 14:15:37.493160 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:37.493715 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:37.493740 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:37.494018 2245195 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0414 14:15:37.499012 2245195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:15:37.512911 2245195 kubeadm.go:883] updating cluster {Name:flannel-793608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.179 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:15:37.513053 2245195 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:15:37.513119 2245195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:15:37.548903 2245195 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:15:37.548981 2245195 ssh_runner.go:195] Run: which lz4
	I0414 14:15:37.553268 2245195 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:15:37.557856 2245195 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:15:37.557890 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 14:15:39.123096 2245195 crio.go:462] duration metric: took 1.569856354s to copy over tarball
	I0414 14:15:39.123200 2245195 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:15:37.970496 2246921 main.go:141] libmachine: (enable-default-cni-793608) waiting for IP...
	I0414 14:15:37.971657 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:37.972252 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:37.972347 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:37.972267 2246989 retry.go:31] will retry after 263.370551ms: waiting for domain to come up
	I0414 14:15:38.238079 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:38.238915 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:38.238941 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:38.238830 2246989 retry.go:31] will retry after 385.607481ms: waiting for domain to come up
	I0414 14:15:38.626321 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:38.627021 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:38.627050 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:38.626998 2246989 retry.go:31] will retry after 445.201612ms: waiting for domain to come up
	I0414 14:15:39.073922 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:39.074637 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:39.074669 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:39.074614 2246989 retry.go:31] will retry after 401.280526ms: waiting for domain to come up
	I0414 14:15:39.477622 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:39.478402 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:39.478431 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:39.478359 2246989 retry.go:31] will retry after 525.224065ms: waiting for domain to come up
	I0414 14:15:40.005081 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:40.005652 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:40.005679 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:40.005612 2246989 retry.go:31] will retry after 886.00622ms: waiting for domain to come up
	I0414 14:15:40.893950 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:40.894495 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:40.894532 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:40.894465 2246989 retry.go:31] will retry after 854.182582ms: waiting for domain to come up
	I0414 14:15:41.493709 2245195 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.370463717s)
	I0414 14:15:41.493748 2245195 crio.go:469] duration metric: took 2.370608674s to extract the tarball
	I0414 14:15:41.493759 2245195 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 14:15:41.535292 2245195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:15:41.588898 2245195 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 14:15:41.588940 2245195 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:15:41.588949 2245195 kubeadm.go:934] updating node { 192.168.72.179 8443 v1.32.2 crio true true} ...
	I0414 14:15:41.589074 2245195 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-793608 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:flannel-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0414 14:15:41.589140 2245195 ssh_runner.go:195] Run: crio config
	I0414 14:15:41.654490 2245195 cni.go:84] Creating CNI manager for "flannel"
	I0414 14:15:41.654526 2245195 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:15:41.654559 2245195 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.179 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-793608 NodeName:flannel-793608 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:15:41.654767 2245195 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-793608"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.179"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.179"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
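The multi-document YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets written to /var/tmp/minikube/kubeadm.yaml.new below and later handed to kubeadm. A hedged sketch of how such a generated config can be exercised before the real init, assuming the same on-node binary path and file name used later in this log:

	# Run the generated config through kubeadm without modifying the node.
	sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	# Print kubeadm's defaults for the same component configs, for comparison.
	sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" \
	  kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration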
	
	I0414 14:15:41.654853 2245195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:15:41.665504 2245195 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:15:41.665589 2245195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 14:15:41.675974 2245195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0414 14:15:41.694468 2245195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:15:41.712581 2245195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0414 14:15:41.731194 2245195 ssh_runner.go:195] Run: grep 192.168.72.179	control-plane.minikube.internal$ /etc/hosts
	I0414 14:15:41.735372 2245195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
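The command above pins control-plane.minikube.internal to the node IP in /etc/hosts idempotently: any existing entry is filtered out, the current mapping is appended, and the result is copied back with sudo. A minimal standalone sketch of the same pattern, with the hostname and IP taken from this log and an arbitrary temp-file name:

	# Idempotently map control-plane.minikube.internal to the node IP.
	IP=192.168.72.179
	HOST=control-plane.minikube.internal
	{ grep -v $'\t'"${HOST}\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts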
	I0414 14:15:41.748968 2245195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:15:41.865867 2245195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:15:41.886997 2245195 certs.go:68] Setting up /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608 for IP: 192.168.72.179
	I0414 14:15:41.887023 2245195 certs.go:194] generating shared ca certs ...
	I0414 14:15:41.887041 2245195 certs.go:226] acquiring lock for ca certs: {Name:mkd994da28098ae08a84efba20f096b52fe71222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:41.887257 2245195 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key
	I0414 14:15:41.887344 2245195 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key
	I0414 14:15:41.887359 2245195 certs.go:256] generating profile certs ...
	I0414 14:15:41.887451 2245195 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.key
	I0414 14:15:41.887472 2245195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt with IP's: []
	I0414 14:15:42.047090 2245195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt ...
	I0414 14:15:42.047130 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.crt: {Name:mk61725d6c2d598935bcc4ddc3016fd5f2c41ddf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:42.047361 2245195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.key ...
	I0414 14:15:42.047378 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/client.key: {Name:mk34fa3cf8ab863f5f74888d1351e7b4a1a82440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:42.047497 2245195 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.key.d08a0052
	I0414 14:15:42.047517 2245195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.crt.d08a0052 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.179]
	I0414 14:15:42.148599 2245195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.crt.d08a0052 ...
	I0414 14:15:42.148638 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.crt.d08a0052: {Name:mk1db924027905394f8766631f4c71ead06a8ced Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:42.148885 2245195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.key.d08a0052 ...
	I0414 14:15:42.148907 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.key.d08a0052: {Name:mkbaf3ac23585ef0764dcb14eee50a6ebe5b28d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:42.149024 2245195 certs.go:381] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.crt.d08a0052 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.crt
	I0414 14:15:42.149140 2245195 certs.go:385] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.key.d08a0052 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.key
	I0414 14:15:42.149237 2245195 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.key
	I0414 14:15:42.149261 2245195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.crt with IP's: []
	I0414 14:15:42.494187 2245195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.crt ...
	I0414 14:15:42.494227 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.crt: {Name:mk0e79a8197af3196f139854e3ee11b8a9027e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:42.494439 2245195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.key ...
	I0414 14:15:42.494459 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.key: {Name:mk9593886f9fd4b010d5b9a09f833fed6848aae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:42.494757 2245195 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem (1338 bytes)
	W0414 14:15:42.494818 2245195 certs.go:480] ignoring /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400_empty.pem, impossibly tiny 0 bytes
	I0414 14:15:42.494832 2245195 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 14:15:42.494857 2245195 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem (1078 bytes)
	I0414 14:15:42.494883 2245195 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:15:42.494912 2245195 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem (1675 bytes)
	I0414 14:15:42.494953 2245195 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:15:42.495564 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:15:42.528844 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:15:42.560780 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:15:42.606054 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:15:42.646740 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 14:15:42.680301 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:15:42.711568 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:15:42.740840 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/flannel-793608/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 14:15:42.771555 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /usr/share/ca-certificates/21904002.pem (1708 bytes)
	I0414 14:15:42.807236 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:15:42.835699 2245195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem --> /usr/share/ca-certificates/2190400.pem (1338 bytes)
	I0414 14:15:42.863445 2245195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:15:42.883659 2245195 ssh_runner.go:195] Run: openssl version
	I0414 14:15:42.890583 2245195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21904002.pem && ln -fs /usr/share/ca-certificates/21904002.pem /etc/ssl/certs/21904002.pem"
	I0414 14:15:42.901664 2245195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21904002.pem
	I0414 14:15:42.906367 2245195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 13:02 /usr/share/ca-certificates/21904002.pem
	I0414 14:15:42.906428 2245195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21904002.pem
	I0414 14:15:42.912610 2245195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21904002.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:15:42.923894 2245195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:15:42.935385 2245195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:15:42.940238 2245195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:15:42.940307 2245195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:15:42.946322 2245195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:15:42.960753 2245195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2190400.pem && ln -fs /usr/share/ca-certificates/2190400.pem /etc/ssl/certs/2190400.pem"
	I0414 14:15:42.973724 2245195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2190400.pem
	I0414 14:15:42.979243 2245195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 13:02 /usr/share/ca-certificates/2190400.pem
	I0414 14:15:42.979299 2245195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2190400.pem
	I0414 14:15:42.985427 2245195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2190400.pem /etc/ssl/certs/51391683.0"
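The three openssl/ln sequences above follow the OpenSSL trust-store convention: each CA file copied under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0 for minikubeCA.pem, for example), which is what `openssl x509 -hash` computes. A sketch of the same pattern for a single certificate, using one of the file names from this log:

	# Link a CA certificate into /etc/ssl/certs under its subject-hash name.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"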
	I0414 14:15:42.996662 2245195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:15:43.001220 2245195 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:15:43.001300 2245195 kubeadm.go:392] StartCluster: {Name:flannel-793608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.179 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:15:43.001402 2245195 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 14:15:43.001459 2245195 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:15:43.038601 2245195 cri.go:89] found id: ""
	I0414 14:15:43.038700 2245195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:15:43.049342 2245195 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:15:43.059616 2245195 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:15:43.070826 2245195 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:15:43.070850 2245195 kubeadm.go:157] found existing configuration files:
	
	I0414 14:15:43.070910 2245195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:15:43.081463 2245195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:15:43.081530 2245195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:15:43.091483 2245195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:15:43.103049 2245195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:15:43.103137 2245195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:15:43.113237 2245195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:15:43.124160 2245195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:15:43.124230 2245195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:15:43.138965 2245195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:15:43.153232 2245195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:15:43.153306 2245195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
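The four grep/rm pairs above are minikube's stale-config cleanup: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it (here the files simply do not exist yet, hence the status-2 greps). A compact sketch of the equivalent loop, assuming the same endpoint and file names:

	# Remove kubeconfig files that do not point at the expected endpoint,
	# so 'kubeadm init' can recreate them.
	ENDPOINT=https://control-plane.minikube.internal:8443
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done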
	I0414 14:15:43.167864 2245195 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:15:43.400744 2245195 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
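The preflight warning above is benign in this flow: minikube starts the kubelet itself, but kubeadm notices the unit is not enabled for boot and prints the suggested remedy, which on the node would be:

	# Remedy suggested by the preflight warning (optional here, since
	# minikube manages the kubelet lifecycle itself).
	sudo systemctl enable kubelet.service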
	I0414 14:15:41.750230 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:41.750751 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:41.750807 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:41.750744 2246989 retry.go:31] will retry after 1.224694163s: waiting for domain to come up
	I0414 14:15:42.976809 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:42.977336 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:42.977384 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:42.977328 2246989 retry.go:31] will retry after 1.264920996s: waiting for domain to come up
	I0414 14:15:44.243549 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:44.244159 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:44.244193 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:44.244066 2246989 retry.go:31] will retry after 1.517311486s: waiting for domain to come up
	I0414 14:15:45.763600 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:45.764116 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:45.764135 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:45.764091 2246989 retry.go:31] will retry after 1.746471018s: waiting for domain to come up
	I0414 14:15:44.130732 2235858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:15:44.130993 2235858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
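The probe quoted above is the exact check kubeadm runs against the kubelet's local healthz endpoint; a connection-refused result only means the kubelet on that node is not serving yet. The same probe can be repeated by hand while waiting:

	# kubeadm's kubelet health check, runnable manually on the node.
	curl -sSL http://localhost:10248/healthz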
	I0414 14:15:47.511868 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:47.512619 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:47.512650 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:47.512522 2246989 retry.go:31] will retry after 3.501788139s: waiting for domain to come up
	I0414 14:15:51.016231 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:51.016805 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:51.016837 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:51.016759 2246989 retry.go:31] will retry after 3.940965891s: waiting for domain to come up
	I0414 14:15:54.321686 2245195 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:15:54.321774 2245195 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:15:54.321884 2245195 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:15:54.322091 2245195 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:15:54.322219 2245195 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:15:54.322316 2245195 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:15:54.323900 2245195 out.go:235]   - Generating certificates and keys ...
	I0414 14:15:54.323989 2245195 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:15:54.324068 2245195 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:15:54.324163 2245195 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:15:54.324244 2245195 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:15:54.324357 2245195 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:15:54.324444 2245195 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:15:54.324558 2245195 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:15:54.324765 2245195 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-793608 localhost] and IPs [192.168.72.179 127.0.0.1 ::1]
	I0414 14:15:54.324837 2245195 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:15:54.325003 2245195 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-793608 localhost] and IPs [192.168.72.179 127.0.0.1 ::1]
	I0414 14:15:54.325062 2245195 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:15:54.325116 2245195 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:15:54.325157 2245195 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:15:54.325240 2245195 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:15:54.325297 2245195 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:15:54.325361 2245195 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:15:54.325410 2245195 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:15:54.325469 2245195 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:15:54.325533 2245195 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:15:54.325622 2245195 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:15:54.325680 2245195 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:15:54.326976 2245195 out.go:235]   - Booting up control plane ...
	I0414 14:15:54.327061 2245195 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:15:54.327129 2245195 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:15:54.327223 2245195 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:15:54.327393 2245195 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:15:54.327473 2245195 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:15:54.327543 2245195 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:15:54.327735 2245195 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:15:54.327895 2245195 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:15:54.327988 2245195 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.78321ms
	I0414 14:15:54.328108 2245195 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:15:54.328217 2245195 kubeadm.go:310] [api-check] The API server is healthy after 5.502171207s
	I0414 14:15:54.328371 2245195 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:15:54.328532 2245195 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:15:54.328601 2245195 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:15:54.328798 2245195 kubeadm.go:310] [mark-control-plane] Marking the node flannel-793608 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:15:54.328865 2245195 kubeadm.go:310] [bootstrap-token] Using token: zu89f8.zeaf2f1xfahm8xki
	I0414 14:15:54.330659 2245195 out.go:235]   - Configuring RBAC rules ...
	I0414 14:15:54.330777 2245195 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:15:54.330853 2245195 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:15:54.330999 2245195 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:15:54.331151 2245195 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:15:54.331343 2245195 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:15:54.331475 2245195 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:15:54.331629 2245195 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:15:54.331710 2245195 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:15:54.331776 2245195 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:15:54.331786 2245195 kubeadm.go:310] 
	I0414 14:15:54.331859 2245195 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:15:54.331868 2245195 kubeadm.go:310] 
	I0414 14:15:54.331988 2245195 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:15:54.331996 2245195 kubeadm.go:310] 
	I0414 14:15:54.332023 2245195 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:15:54.332081 2245195 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:15:54.332156 2245195 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:15:54.332174 2245195 kubeadm.go:310] 
	I0414 14:15:54.332254 2245195 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:15:54.332264 2245195 kubeadm.go:310] 
	I0414 14:15:54.332330 2245195 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:15:54.332345 2245195 kubeadm.go:310] 
	I0414 14:15:54.332421 2245195 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:15:54.332536 2245195 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:15:54.332628 2245195 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:15:54.332638 2245195 kubeadm.go:310] 
	I0414 14:15:54.332771 2245195 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:15:54.332848 2245195 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:15:54.332854 2245195 kubeadm.go:310] 
	I0414 14:15:54.332922 2245195 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zu89f8.zeaf2f1xfahm8xki \
	I0414 14:15:54.333010 2245195 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a5a7cfa3817d077a98fd35a9c88a0bda6880ef9130519c66d815ea92b980d7c \
	I0414 14:15:54.333034 2245195 kubeadm.go:310] 	--control-plane 
	I0414 14:15:54.333039 2245195 kubeadm.go:310] 
	I0414 14:15:54.333109 2245195 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:15:54.333115 2245195 kubeadm.go:310] 
	I0414 14:15:54.333216 2245195 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zu89f8.zeaf2f1xfahm8xki \
	I0414 14:15:54.333391 2245195 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a5a7cfa3817d077a98fd35a9c88a0bda6880ef9130519c66d815ea92b980d7c 
	I0414 14:15:54.333407 2245195 cni.go:84] Creating CNI manager for "flannel"
	I0414 14:15:54.334755 2245195 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0414 14:15:54.335890 2245195 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 14:15:54.344160 2245195 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 14:15:54.344176 2245195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0414 14:15:54.374891 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0414 14:15:54.962412 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:15:54.963168 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find current IP address of domain enable-default-cni-793608 in network mk-enable-default-cni-793608
	I0414 14:15:54.963191 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | I0414 14:15:54.963134 2246989 retry.go:31] will retry after 5.168467899s: waiting for domain to come up
	I0414 14:15:54.872301 2245195 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:15:54.872398 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:54.872433 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-793608 minikube.k8s.io/updated_at=2025_04_14T14_15_54_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=460835bb8f21087bfa90e48a25f4afc66a903d88 minikube.k8s.io/name=flannel-793608 minikube.k8s.io/primary=true
	I0414 14:15:54.889203 2245195 ops.go:34] apiserver oom_adj: -16
	I0414 14:15:55.015715 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:55.515973 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:56.016052 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:56.515895 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:57.015870 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:57.516553 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:58.016409 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:58.516652 2245195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:15:58.638498 2245195 kubeadm.go:1113] duration metric: took 3.766167061s to wait for elevateKubeSystemPrivileges
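The repeated `kubectl get sa default` calls above poll until the default ServiceAccount exists, which is what the elevateKubeSystemPrivileges step (the minikube-rbac cluster-admin binding) waits on. A hedged sketch of an equivalent wait loop, using the paths from this log; the 0.5s interval is illustrative only:

	# Wait for the 'default' ServiceAccount before proceeding, mirroring
	# the polling above.
	KUBECTL=/var/lib/minikube/binaries/v1.32.2/kubectl
	until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done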
	I0414 14:15:58.638542 2245195 kubeadm.go:394] duration metric: took 15.637248519s to StartCluster
	I0414 14:15:58.638569 2245195 settings.go:142] acquiring lock: {Name:mk2be36efecc8d95b489214d6449055db55f6f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:58.638677 2245195 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:15:58.640030 2245195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/kubeconfig: {Name:mka4d12cff403cd78c270c5ea752d21aa135c1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:15:58.640295 2245195 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.179 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:15:58.640313 2245195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:15:58.640376 2245195 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:15:58.640504 2245195 addons.go:69] Setting storage-provisioner=true in profile "flannel-793608"
	I0414 14:15:58.640526 2245195 addons.go:69] Setting default-storageclass=true in profile "flannel-793608"
	I0414 14:15:58.640547 2245195 addons.go:238] Setting addon storage-provisioner=true in "flannel-793608"
	I0414 14:15:58.640550 2245195 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-793608"
	I0414 14:15:58.640593 2245195 host.go:66] Checking if "flannel-793608" exists ...
	I0414 14:15:58.640513 2245195 config.go:182] Loaded profile config "flannel-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:15:58.641023 2245195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:15:58.641041 2245195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:15:58.641052 2245195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:15:58.641080 2245195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:15:58.642641 2245195 out.go:177] * Verifying Kubernetes components...
	I0414 14:15:58.644038 2245195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:15:58.657672 2245195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35129
	I0414 14:15:58.657684 2245195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34053
	I0414 14:15:58.658211 2245195 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:15:58.658255 2245195 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:15:58.658709 2245195 main.go:141] libmachine: Using API Version  1
	I0414 14:15:58.658724 2245195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:15:58.658724 2245195 main.go:141] libmachine: Using API Version  1
	I0414 14:15:58.658741 2245195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:15:58.659096 2245195 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:15:58.659109 2245195 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:15:58.659278 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetState
	I0414 14:15:58.659593 2245195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:15:58.659622 2245195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:15:58.662886 2245195 addons.go:238] Setting addon default-storageclass=true in "flannel-793608"
	I0414 14:15:58.662943 2245195 host.go:66] Checking if "flannel-793608" exists ...
	I0414 14:15:58.663326 2245195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:15:58.663378 2245195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:15:58.676384 2245195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35115
	I0414 14:15:58.677014 2245195 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:15:58.677627 2245195 main.go:141] libmachine: Using API Version  1
	I0414 14:15:58.677663 2245195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:15:58.678164 2245195 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:15:58.678390 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetState
	I0414 14:15:58.680209 2245195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37159
	I0414 14:15:58.680777 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:58.680982 2245195 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:15:58.681468 2245195 main.go:141] libmachine: Using API Version  1
	I0414 14:15:58.681494 2245195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:15:58.681912 2245195 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:15:58.682367 2245195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:15:58.682406 2245195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:15:58.682479 2245195 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:15:58.683790 2245195 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:15:58.683805 2245195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 14:15:58.683823 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:58.687182 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:58.687747 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:58.687772 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:58.688014 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:58.688156 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:58.688286 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:58.688424 2245195 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa Username:docker}
	I0414 14:15:58.704623 2245195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42961
	I0414 14:15:58.705030 2245195 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:15:58.705522 2245195 main.go:141] libmachine: Using API Version  1
	I0414 14:15:58.705545 2245195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:15:58.705873 2245195 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:15:58.706088 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetState
	I0414 14:15:58.707899 2245195 main.go:141] libmachine: (flannel-793608) Calling .DriverName
	I0414 14:15:58.708169 2245195 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 14:15:58.708185 2245195 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 14:15:58.708207 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHHostname
	I0414 14:15:58.711345 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:58.711798 2245195 main.go:141] libmachine: (flannel-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9d:72", ip: ""} in network mk-flannel-793608: {Iface:virbr3 ExpiryTime:2025-04-14 15:15:26 +0000 UTC Type:0 Mac:52:54:00:62:9d:72 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:flannel-793608 Clientid:01:52:54:00:62:9d:72}
	I0414 14:15:58.711837 2245195 main.go:141] libmachine: (flannel-793608) DBG | domain flannel-793608 has defined IP address 192.168.72.179 and MAC address 52:54:00:62:9d:72 in network mk-flannel-793608
	I0414 14:15:58.712036 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHPort
	I0414 14:15:58.712219 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHKeyPath
	I0414 14:15:58.712341 2245195 main.go:141] libmachine: (flannel-793608) Calling .GetSSHUsername
	I0414 14:15:58.712475 2245195 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/flannel-793608/id_rsa Username:docker}
	I0414 14:15:58.899648 2245195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:15:58.899700 2245195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
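The pipeline above edits the live coredns ConfigMap in place: a `hosts` block mapping 192.168.72.1 to host.minikube.internal (with `fallthrough`) is inserted ahead of the `forward . /etc/resolv.conf` line, `log` is added before `errors`, and the ConfigMap is replaced with the result; the "host record injected" line further down confirms it. A sketch of how the patched Corefile can be inspected afterwards, using the same kubectl binary and kubeconfig:

	# Show the patched Corefile; the hosts block with host.minikube.internal
	# should now appear ahead of the forward plugin.
	sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'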
	I0414 14:15:59.086139 2245195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 14:15:59.182264 2245195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:15:59.471260 2245195 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0414 14:15:59.471370 2245195 main.go:141] libmachine: Making call to close driver server
	I0414 14:15:59.471390 2245195 main.go:141] libmachine: (flannel-793608) Calling .Close
	I0414 14:15:59.472005 2245195 node_ready.go:35] waiting up to 15m0s for node "flannel-793608" to be "Ready" ...
	I0414 14:15:59.472484 2245195 main.go:141] libmachine: (flannel-793608) DBG | Closing plugin on server side
	I0414 14:15:59.472484 2245195 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:15:59.472510 2245195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:15:59.472520 2245195 main.go:141] libmachine: Making call to close driver server
	I0414 14:15:59.472529 2245195 main.go:141] libmachine: (flannel-793608) Calling .Close
	I0414 14:15:59.472837 2245195 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:15:59.472856 2245195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:15:59.472856 2245195 main.go:141] libmachine: (flannel-793608) DBG | Closing plugin on server side
	I0414 14:15:59.509367 2245195 main.go:141] libmachine: Making call to close driver server
	I0414 14:15:59.509402 2245195 main.go:141] libmachine: (flannel-793608) Calling .Close
	I0414 14:15:59.509711 2245195 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:15:59.509732 2245195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:15:59.509736 2245195 main.go:141] libmachine: (flannel-793608) DBG | Closing plugin on server side
	I0414 14:15:59.829452 2245195 main.go:141] libmachine: Making call to close driver server
	I0414 14:15:59.829478 2245195 main.go:141] libmachine: (flannel-793608) Calling .Close
	I0414 14:15:59.829880 2245195 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:15:59.829909 2245195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:15:59.829920 2245195 main.go:141] libmachine: Making call to close driver server
	I0414 14:15:59.829931 2245195 main.go:141] libmachine: (flannel-793608) Calling .Close
	I0414 14:15:59.829969 2245195 main.go:141] libmachine: (flannel-793608) DBG | Closing plugin on server side
	I0414 14:15:59.831466 2245195 main.go:141] libmachine: (flannel-793608) DBG | Closing plugin on server side
	I0414 14:15:59.831574 2245195 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:15:59.831592 2245195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:15:59.832957 2245195 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
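With both addons applied, a quick check against the same kubeconfig confirms the result; the pod name storage-provisioner and the StorageClass it backs come from the stock addon manifests and are assumptions here, not taken from this log:

	# Verify the two addons enabled above.
	KUBECTL=/var/lib/minikube/binaries/v1.32.2/kubectl
	sudo "$KUBECTL" --kubeconfig=/var/lib/minikube/kubeconfig get storageclass
	sudo "$KUBECTL" --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pod storage-provisioner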
	I0414 14:16:00.135640 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.136179 2246921 main.go:141] libmachine: (enable-default-cni-793608) found domain IP: 192.168.61.51
	I0414 14:16:00.136218 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has current primary IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.136228 2246921 main.go:141] libmachine: (enable-default-cni-793608) reserving static IP address...
	I0414 14:16:00.136619 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-793608", mac: "52:54:00:17:5c:90", ip: "192.168.61.51"} in network mk-enable-default-cni-793608
	I0414 14:16:00.222763 2246921 main.go:141] libmachine: (enable-default-cni-793608) reserved static IP address 192.168.61.51 for domain enable-default-cni-793608
	I0414 14:16:00.222799 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Getting to WaitForSSH function...
	I0414 14:16:00.222807 2246921 main.go:141] libmachine: (enable-default-cni-793608) waiting for SSH...
	I0414 14:16:00.225129 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.225617 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:minikube Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.225648 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.225770 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Using SSH client type: external
	I0414 14:16:00.225797 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Using SSH private key: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa (-rw-------)
	I0414 14:16:00.225856 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:16:00.225876 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | About to run SSH command:
	I0414 14:16:00.225885 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | exit 0
	I0414 14:16:00.349424 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | SSH cmd err, output: <nil>: 
	I0414 14:16:00.349710 2246921 main.go:141] libmachine: (enable-default-cni-793608) KVM machine creation complete
	I0414 14:16:00.350094 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetConfigRaw
	I0414 14:16:00.350758 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:00.350973 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:00.351171 2246921 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:16:00.351186 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetState
	I0414 14:16:00.352474 2246921 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:16:00.352489 2246921 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:16:00.352495 2246921 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:16:00.352501 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:00.354605 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.355001 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.355029 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.355171 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:00.355341 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.355496 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.355665 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:00.355853 2246921 main.go:141] libmachine: Using SSH client type: native
	I0414 14:16:00.356079 2246921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0414 14:16:00.356090 2246921 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:16:00.456380 2246921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:16:00.456429 2246921 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:16:00.456438 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:00.460571 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.461142 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.461175 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.461350 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:00.461649 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.461843 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.461993 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:00.462152 2246921 main.go:141] libmachine: Using SSH client type: native
	I0414 14:16:00.462352 2246921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0414 14:16:00.462363 2246921 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:16:00.565817 2246921 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:16:00.565933 2246921 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:16:00.565955 2246921 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:16:00.565967 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetMachineName
	I0414 14:16:00.566215 2246921 buildroot.go:166] provisioning hostname "enable-default-cni-793608"
	I0414 14:16:00.566248 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetMachineName
	I0414 14:16:00.566475 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:00.569565 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.570007 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.570036 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.570148 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:00.570313 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.570512 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.570649 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:00.570830 2246921 main.go:141] libmachine: Using SSH client type: native
	I0414 14:16:00.571038 2246921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0414 14:16:00.571050 2246921 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-793608 && echo "enable-default-cni-793608" | sudo tee /etc/hostname
	I0414 14:16:00.692563 2246921 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-793608
	
	I0414 14:16:00.692608 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:00.695656 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.695992 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.696018 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.696190 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:00.696382 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.696512 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.696618 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:00.696827 2246921 main.go:141] libmachine: Using SSH client type: native
	I0414 14:16:00.697070 2246921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0414 14:16:00.697097 2246921 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-793608' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-793608/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-793608' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:16:00.806026 2246921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:16:00.806057 2246921 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20623-2183077/.minikube CaCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20623-2183077/.minikube}
	I0414 14:16:00.806076 2246921 buildroot.go:174] setting up certificates
	I0414 14:16:00.806087 2246921 provision.go:84] configureAuth start
	I0414 14:16:00.806096 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetMachineName
	I0414 14:16:00.806436 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetIP
	I0414 14:16:00.809322 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.809741 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.809771 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.809895 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:00.812367 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.812741 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.812771 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.812939 2246921 provision.go:143] copyHostCerts
	I0414 14:16:00.812997 2246921 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem, removing ...
	I0414 14:16:00.813016 2246921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem
	I0414 14:16:00.813075 2246921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.pem (1078 bytes)
	I0414 14:16:00.813177 2246921 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem, removing ...
	I0414 14:16:00.813185 2246921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem
	I0414 14:16:00.813204 2246921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/cert.pem (1123 bytes)
	I0414 14:16:00.813273 2246921 exec_runner.go:144] found /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem, removing ...
	I0414 14:16:00.813281 2246921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem
	I0414 14:16:00.813298 2246921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20623-2183077/.minikube/key.pem (1675 bytes)
	I0414 14:16:00.813356 2246921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-793608 san=[127.0.0.1 192.168.61.51 enable-default-cni-793608 localhost minikube]
	I0414 14:16:00.907159 2246921 provision.go:177] copyRemoteCerts
	I0414 14:16:00.907230 2246921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:16:00.907255 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:00.909912 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.910303 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:00.910362 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:00.910514 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:00.910722 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:00.910890 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:00.911056 2246921 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa Username:docker}
	I0414 14:16:00.991599 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 14:16:01.015103 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0414 14:16:01.038414 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:16:01.061551 2246921 provision.go:87] duration metric: took 255.446538ms to configureAuth
	I0414 14:16:01.061589 2246921 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:16:01.061847 2246921 config.go:182] Loaded profile config "enable-default-cni-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:16:01.061953 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:01.064789 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.065216 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.065256 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.065409 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:01.065624 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.065779 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.065922 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:01.066067 2246921 main.go:141] libmachine: Using SSH client type: native
	I0414 14:16:01.066371 2246921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0414 14:16:01.066394 2246921 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 14:16:01.298967 2246921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 14:16:01.299001 2246921 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:16:01.299010 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetURL
	I0414 14:16:01.300270 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | using libvirt version 6000000
	I0414 14:16:01.302669 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.303154 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.303193 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.303319 2246921 main.go:141] libmachine: Docker is up and running!
	I0414 14:16:01.303335 2246921 main.go:141] libmachine: Reticulating splines...
	I0414 14:16:01.303344 2246921 client.go:171] duration metric: took 25.3946292s to LocalClient.Create
	I0414 14:16:01.303368 2246921 start.go:167] duration metric: took 25.394704554s to libmachine.API.Create "enable-default-cni-793608"
	I0414 14:16:01.303379 2246921 start.go:293] postStartSetup for "enable-default-cni-793608" (driver="kvm2")
	I0414 14:16:01.303391 2246921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:16:01.303418 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:01.303684 2246921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:16:01.303712 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:01.305963 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.306296 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.306333 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.306447 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:01.306611 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.306757 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:01.306883 2246921 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa Username:docker}
	I0414 14:16:01.391656 2246921 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:16:01.396053 2246921 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:16:01.396081 2246921 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/addons for local assets ...
	I0414 14:16:01.396141 2246921 filesync.go:126] Scanning /home/jenkins/minikube-integration/20623-2183077/.minikube/files for local assets ...
	I0414 14:16:01.396212 2246921 filesync.go:149] local asset: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem -> 21904002.pem in /etc/ssl/certs
	I0414 14:16:01.396298 2246921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:16:01.406179 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:16:01.431109 2246921 start.go:296] duration metric: took 127.714931ms for postStartSetup
	I0414 14:16:01.431163 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetConfigRaw
	I0414 14:16:01.431902 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetIP
	I0414 14:16:01.434569 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.434922 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.434956 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.435258 2246921 profile.go:143] Saving config to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/config.json ...
	I0414 14:16:01.435452 2246921 start.go:128] duration metric: took 25.549370799s to createHost
	I0414 14:16:01.435475 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:01.437807 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.438150 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.438170 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.438300 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:01.438475 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.438685 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.438882 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:01.439043 2246921 main.go:141] libmachine: Using SSH client type: native
	I0414 14:16:01.439232 2246921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0414 14:16:01.439248 2246921 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:16:01.543192 2246921 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640161.512131339
	
	I0414 14:16:01.543222 2246921 fix.go:216] guest clock: 1744640161.512131339
	I0414 14:16:01.543232 2246921 fix.go:229] Guest: 2025-04-14 14:16:01.512131339 +0000 UTC Remote: 2025-04-14 14:16:01.435464689 +0000 UTC m=+29.759982396 (delta=76.66665ms)
	I0414 14:16:01.543257 2246921 fix.go:200] guest clock delta is within tolerance: 76.66665ms
	I0414 14:16:01.543264 2246921 start.go:83] releasing machines lock for "enable-default-cni-793608", held for 25.657434721s
	I0414 14:16:01.543289 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:01.543595 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetIP
	I0414 14:16:01.546776 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.547177 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.547209 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.547370 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:01.547937 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:01.548127 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:01.548243 2246921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:16:01.548294 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:01.548390 2246921 ssh_runner.go:195] Run: cat /version.json
	I0414 14:16:01.548429 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:01.551187 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.551441 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.551622 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.551651 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.551769 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:01.551902 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:01.551943 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:01.552007 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.552128 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:01.552233 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:01.552341 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:01.552436 2246921 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa Username:docker}
	I0414 14:16:01.552518 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:01.552664 2246921 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa Username:docker}
	I0414 14:16:01.626735 2246921 ssh_runner.go:195] Run: systemctl --version
	I0414 14:16:01.656365 2246921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 14:16:01.812225 2246921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:16:01.819633 2246921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:16:01.819716 2246921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:16:01.841839 2246921 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:16:01.841866 2246921 start.go:495] detecting cgroup driver to use...
	I0414 14:16:01.841952 2246921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 14:16:01.857973 2246921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 14:16:01.876392 2246921 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:16:01.876465 2246921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:16:01.890055 2246921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:16:01.903801 2246921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:16:02.017060 2246921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:16:02.157678 2246921 docker.go:233] disabling docker service ...
	I0414 14:16:02.157771 2246921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:16:02.172664 2246921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:16:02.187082 2246921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:16:02.331112 2246921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:16:02.472406 2246921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:16:02.489418 2246921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:16:02.510696 2246921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 14:16:02.510773 2246921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.523647 2246921 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 14:16:02.523745 2246921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.535466 2246921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.546736 2246921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.559297 2246921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:16:02.571500 2246921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.583906 2246921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.602844 2246921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:16:02.615974 2246921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:16:02.628273 2246921 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:16:02.628364 2246921 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:16:02.643490 2246921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:16:02.654314 2246921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:16:02.785718 2246921 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 14:16:02.885394 2246921 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 14:16:02.885481 2246921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 14:16:02.890584 2246921 start.go:563] Will wait 60s for crictl version
	I0414 14:16:02.890644 2246921 ssh_runner.go:195] Run: which crictl
	I0414 14:16:02.894771 2246921 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:16:02.944686 2246921 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 14:16:02.944817 2246921 ssh_runner.go:195] Run: crio --version
	I0414 14:16:02.977319 2246921 ssh_runner.go:195] Run: crio --version
	I0414 14:16:03.011026 2246921 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 14:15:59.833954 2245195 addons.go:514] duration metric: took 1.193578801s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0414 14:15:59.976226 2245195 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-793608" context rescaled to 1 replicas
	I0414 14:16:01.475533 2245195 node_ready.go:53] node "flannel-793608" has status "Ready":"False"
	I0414 14:16:03.476866 2245195 node_ready.go:53] node "flannel-793608" has status "Ready":"False"
	I0414 14:16:03.011997 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetIP
	I0414 14:16:03.014857 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:03.015311 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:03.015340 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:03.015594 2246921 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0414 14:16:03.020865 2246921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:16:03.036489 2246921 kubeadm.go:883] updating cluster {Name:enable-default-cni-793608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:enable-default-cni-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:16:03.036649 2246921 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:16:03.036718 2246921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:16:03.074619 2246921 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:16:03.074721 2246921 ssh_runner.go:195] Run: which lz4
	I0414 14:16:03.079439 2246921 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:16:03.084705 2246921 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:16:03.084757 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 14:16:04.551650 2246921 crio.go:462] duration metric: took 1.472256374s to copy over tarball
	I0414 14:16:04.551756 2246921 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:16:05.975760 2245195 node_ready.go:53] node "flannel-793608" has status "Ready":"False"
	I0414 14:16:08.138018 2245195 node_ready.go:53] node "flannel-793608" has status "Ready":"False"
	I0414 14:16:06.821676 2246921 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.269870769s)
	I0414 14:16:06.821713 2246921 crio.go:469] duration metric: took 2.270028033s to extract the tarball
	I0414 14:16:06.821725 2246921 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 14:16:06.862078 2246921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:16:06.905635 2246921 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 14:16:06.905661 2246921 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:16:06.905669 2246921 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.32.2 crio true true} ...
	I0414 14:16:06.905814 2246921 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-793608 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:enable-default-cni-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0414 14:16:06.905913 2246921 ssh_runner.go:195] Run: crio config
	I0414 14:16:06.967144 2246921 cni.go:84] Creating CNI manager for "bridge"
	I0414 14:16:06.967177 2246921 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:16:06.967207 2246921 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-793608 NodeName:enable-default-cni-793608 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:16:06.967367 2246921 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-793608"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.51"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 14:16:06.967440 2246921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:16:06.979475 2246921 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:16:06.979549 2246921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 14:16:06.989632 2246921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0414 14:16:07.006974 2246921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:16:07.022847 2246921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I0414 14:16:07.039334 2246921 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0414 14:16:07.044243 2246921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:16:07.057149 2246921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:16:07.178687 2246921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:16:07.197629 2246921 certs.go:68] Setting up /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608 for IP: 192.168.61.51
	I0414 14:16:07.197660 2246921 certs.go:194] generating shared ca certs ...
	I0414 14:16:07.197685 2246921 certs.go:226] acquiring lock for ca certs: {Name:mkd994da28098ae08a84efba20f096b52fe71222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:07.197885 2246921 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key
	I0414 14:16:07.197942 2246921 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key
	I0414 14:16:07.197956 2246921 certs.go:256] generating profile certs ...
	I0414 14:16:07.198029 2246921 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.key
	I0414 14:16:07.198048 2246921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt with IP's: []
	I0414 14:16:07.570874 2246921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt ...
	I0414 14:16:07.570904 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.crt: {Name:mk64c63d6e720c22aec573b6c12aa4a432b22501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:07.571092 2246921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.key ...
	I0414 14:16:07.571109 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/client.key: {Name:mk0c2d9a7feb9ede0f0a997f4aa74d9da8bd11d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:07.571225 2246921 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.key.73eedca3
	I0414 14:16:07.571249 2246921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.crt.73eedca3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.51]
	I0414 14:16:07.814982 2246921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.crt.73eedca3 ...
	I0414 14:16:07.815014 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.crt.73eedca3: {Name:mkeadb0ce7226e84070b03ee54954b097e65052a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:07.815181 2246921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.key.73eedca3 ...
	I0414 14:16:07.815199 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.key.73eedca3: {Name:mk35e329e7bcce4cbc7bc648e6d4baaf541bedca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:07.815273 2246921 certs.go:381] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.crt.73eedca3 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.crt
	I0414 14:16:07.815343 2246921 certs.go:385] copying /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.key.73eedca3 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.key
	I0414 14:16:07.838493 2246921 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.key
	I0414 14:16:07.838529 2246921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.crt with IP's: []
	I0414 14:16:08.294087 2246921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.crt ...
	I0414 14:16:08.294124 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.crt: {Name:mk366e930f55c71d9e0d1a041fc8658466e0adca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:08.348261 2246921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.key ...
	I0414 14:16:08.348306 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.key: {Name:mk319b3ead18f415068eabdc65c4b137c462dab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:08.348591 2246921 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem (1338 bytes)
	W0414 14:16:08.348644 2246921 certs.go:480] ignoring /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400_empty.pem, impossibly tiny 0 bytes
	I0414 14:16:08.348659 2246921 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 14:16:08.348693 2246921 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/ca.pem (1078 bytes)
	I0414 14:16:08.348724 2246921 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:16:08.348775 2246921 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/key.pem (1675 bytes)
	I0414 14:16:08.348827 2246921 certs.go:484] found cert: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem (1708 bytes)
	I0414 14:16:08.349593 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:16:08.435452 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:16:08.462331 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:16:08.492377 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:16:08.517463 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 14:16:08.584048 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 14:16:08.609810 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:16:08.634266 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/enable-default-cni-793608/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 14:16:08.663003 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:16:08.688663 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/certs/2190400.pem --> /usr/share/ca-certificates/2190400.pem (1338 bytes)
	I0414 14:16:08.713403 2246921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/ssl/certs/21904002.pem --> /usr/share/ca-certificates/21904002.pem (1708 bytes)
	I0414 14:16:08.736962 2246921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:16:08.754353 2246921 ssh_runner.go:195] Run: openssl version
	I0414 14:16:08.760345 2246921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21904002.pem && ln -fs /usr/share/ca-certificates/21904002.pem /etc/ssl/certs/21904002.pem"
	I0414 14:16:08.773588 2246921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21904002.pem
	I0414 14:16:08.789050 2246921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 13:02 /usr/share/ca-certificates/21904002.pem
	I0414 14:16:08.789138 2246921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21904002.pem
	I0414 14:16:08.801556 2246921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21904002.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:16:08.818825 2246921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:16:08.835651 2246921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:16:08.841380 2246921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:16:08.841444 2246921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:16:08.847453 2246921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:16:08.859009 2246921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2190400.pem && ln -fs /usr/share/ca-certificates/2190400.pem /etc/ssl/certs/2190400.pem"
	I0414 14:16:08.871527 2246921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2190400.pem
	I0414 14:16:08.877272 2246921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 13:02 /usr/share/ca-certificates/2190400.pem
	I0414 14:16:08.877350 2246921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2190400.pem
	I0414 14:16:08.883496 2246921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2190400.pem /etc/ssl/certs/51391683.0"
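For reference, the symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: the value printed by `openssl x509 -hash` becomes the link name, which is how TLS libraries that scan /etc/ssl/certs locate a CA by hash. The same step can be reproduced by hand (a minimal sketch using paths taken from the log):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"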
	I0414 14:16:08.895900 2246921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:16:08.900786 2246921 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:16:08.900847 2246921 kubeadm.go:392] StartCluster: {Name:enable-default-cni-793608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:enable-default-cni-793608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:16:08.900953 2246921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 14:16:08.901017 2246921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:16:08.943988 2246921 cri.go:89] found id: ""
	I0414 14:16:08.944083 2246921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:16:08.955727 2246921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:16:08.967585 2246921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:16:08.978749 2246921 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:16:08.978778 2246921 kubeadm.go:157] found existing configuration files:
	
	I0414 14:16:08.978835 2246921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:16:08.989765 2246921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:16:08.989846 2246921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:16:09.000464 2246921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:16:09.011408 2246921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:16:09.011475 2246921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:16:09.022110 2246921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:16:09.032105 2246921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:16:09.032178 2246921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:16:09.044673 2246921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:16:09.056844 2246921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:16:09.056918 2246921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 14:16:09.069647 2246921 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:16:09.269121 2246921 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
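The [WARNING Service-Kubelet] line is kubeadm's preflight check noting that the kubelet systemd unit is not enabled; minikube starts the kubelet explicitly instead (see the `sudo systemctl start kubelet` run later in this log). The fix the warning itself suggests, for making the unit persist across reboots, is:

    sudo systemctl enable kubelet.service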
	I0414 14:16:10.474671 2245195 node_ready.go:53] node "flannel-793608" has status "Ready":"False"
	I0414 14:16:10.979638 2245195 node_ready.go:49] node "flannel-793608" has status "Ready":"True"
	I0414 14:16:10.979667 2245195 node_ready.go:38] duration metric: took 11.50763178s for node "flannel-793608" to be "Ready" ...
	I0414 14:16:10.979680 2245195 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 14:16:10.994987 2245195 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:13.001584 2245195 pod_ready.go:103] pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:15.501069 2245195 pod_ready.go:103] pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:18.002171 2245195 pod_ready.go:103] pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:19.808099 2246921 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:16:19.808186 2246921 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:16:19.808295 2246921 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:16:19.808429 2246921 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:16:19.808568 2246921 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:16:19.808676 2246921 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:16:19.810130 2246921 out.go:235]   - Generating certificates and keys ...
	I0414 14:16:19.810238 2246921 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:16:19.810298 2246921 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:16:19.810365 2246921 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:16:19.810414 2246921 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:16:19.810470 2246921 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:16:19.810534 2246921 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:16:19.810597 2246921 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:16:19.810700 2246921 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-793608 localhost] and IPs [192.168.61.51 127.0.0.1 ::1]
	I0414 14:16:19.810746 2246921 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:16:19.810861 2246921 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-793608 localhost] and IPs [192.168.61.51 127.0.0.1 ::1]
	I0414 14:16:19.810922 2246921 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:16:19.810976 2246921 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:16:19.811019 2246921 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:16:19.811063 2246921 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:16:19.811110 2246921 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:16:19.811178 2246921 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:16:19.811247 2246921 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:16:19.811315 2246921 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:16:19.811416 2246921 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:16:19.811560 2246921 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:16:19.811693 2246921 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:16:19.813222 2246921 out.go:235]   - Booting up control plane ...
	I0414 14:16:19.813343 2246921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:16:19.813423 2246921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:16:19.813517 2246921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:16:19.813626 2246921 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:16:19.813707 2246921 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:16:19.813744 2246921 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:16:19.813927 2246921 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:16:19.814039 2246921 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:16:19.814093 2246921 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.194914ms
	I0414 14:16:19.814160 2246921 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:16:19.814211 2246921 kubeadm.go:310] [api-check] The API server is healthy after 5.003151438s
	I0414 14:16:19.814310 2246921 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:16:19.814464 2246921 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:16:19.814520 2246921 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:16:19.814781 2246921 kubeadm.go:310] [mark-control-plane] Marking the node enable-default-cni-793608 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:16:19.814844 2246921 kubeadm.go:310] [bootstrap-token] Using token: 3eizlo.lt0uyxdkcw3v7pf4
	I0414 14:16:19.816206 2246921 out.go:235]   - Configuring RBAC rules ...
	I0414 14:16:19.816316 2246921 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:16:19.816416 2246921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:16:19.816635 2246921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:16:19.816797 2246921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:16:19.816931 2246921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:16:19.817040 2246921 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:16:19.817207 2246921 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:16:19.817272 2246921 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:16:19.817346 2246921 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:16:19.817355 2246921 kubeadm.go:310] 
	I0414 14:16:19.817449 2246921 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:16:19.817464 2246921 kubeadm.go:310] 
	I0414 14:16:19.817567 2246921 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:16:19.817574 2246921 kubeadm.go:310] 
	I0414 14:16:19.817595 2246921 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:16:19.817645 2246921 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:16:19.817714 2246921 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:16:19.817721 2246921 kubeadm.go:310] 
	I0414 14:16:19.817782 2246921 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:16:19.817791 2246921 kubeadm.go:310] 
	I0414 14:16:19.817831 2246921 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:16:19.817850 2246921 kubeadm.go:310] 
	I0414 14:16:19.817913 2246921 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:16:19.818015 2246921 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:16:19.818135 2246921 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:16:19.818154 2246921 kubeadm.go:310] 
	I0414 14:16:19.818285 2246921 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:16:19.818379 2246921 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:16:19.818388 2246921 kubeadm.go:310] 
	I0414 14:16:19.818499 2246921 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3eizlo.lt0uyxdkcw3v7pf4 \
	I0414 14:16:19.818642 2246921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a5a7cfa3817d077a98fd35a9c88a0bda6880ef9130519c66d815ea92b980d7c \
	I0414 14:16:19.818667 2246921 kubeadm.go:310] 	--control-plane 
	I0414 14:16:19.818671 2246921 kubeadm.go:310] 
	I0414 14:16:19.818846 2246921 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:16:19.818859 2246921 kubeadm.go:310] 
	I0414 14:16:19.818924 2246921 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3eizlo.lt0uyxdkcw3v7pf4 \
	I0414 14:16:19.819079 2246921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a5a7cfa3817d077a98fd35a9c88a0bda6880ef9130519c66d815ea92b980d7c 
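The --discovery-token-ca-cert-hash sha256:... value in the join commands above is, per the upstream kubeadm documentation, the SHA-256 hash of the cluster CA's DER-encoded public key. A sketch of recomputing it on the node (the certificate path is inferred from the certs directory used earlier in this log; on a stock kubeadm host it would be /etc/kubernetes/pki/ca.crt):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'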
	I0414 14:16:19.819111 2246921 cni.go:84] Creating CNI manager for "bridge"
	I0414 14:16:19.820701 2246921 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 14:16:19.822064 2246921 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 14:16:19.833700 2246921 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
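The 496-byte /etc/cni/net.d/1-k8s.conflist written here is not reproduced in the log; a bridge CNI configuration of the general shape used for the "bridge" CNI option looks roughly like the following (illustrative only; the subnet and field values are assumptions, not the exact file minikube writes):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF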
	I0414 14:16:19.853878 2246921 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:16:19.853933 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:19.853982 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-793608 minikube.k8s.io/updated_at=2025_04_14T14_16_19_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=460835bb8f21087bfa90e48a25f4afc66a903d88 minikube.k8s.io/name=enable-default-cni-793608 minikube.k8s.io/primary=true
	I0414 14:16:19.982063 2246921 ops.go:34] apiserver oom_adj: -16
	I0414 14:16:19.982081 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:20.483212 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:20.983097 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:21.482224 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:21.982202 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:22.483188 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:22.982274 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:23.483138 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:23.982281 2246921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:16:24.127347 2246921 kubeadm.go:1113] duration metric: took 4.273479771s to wait for elevateKubeSystemPrivileges
	I0414 14:16:24.127397 2246921 kubeadm.go:394] duration metric: took 15.226555734s to StartCluster
	I0414 14:16:24.127425 2246921 settings.go:142] acquiring lock: {Name:mk2be36efecc8d95b489214d6449055db55f6f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:24.127515 2246921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:16:24.128586 2246921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20623-2183077/kubeconfig: {Name:mka4d12cff403cd78c270c5ea752d21aa135c1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:16:24.128872 2246921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:16:24.128877 2246921 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:16:24.128973 2246921 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:16:24.129079 2246921 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-793608"
	I0414 14:16:24.129102 2246921 addons.go:238] Setting addon storage-provisioner=true in "enable-default-cni-793608"
	I0414 14:16:24.129137 2246921 host.go:66] Checking if "enable-default-cni-793608" exists ...
	I0414 14:16:24.129191 2246921 config.go:182] Loaded profile config "enable-default-cni-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:16:24.129134 2246921 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-793608"
	I0414 14:16:24.129295 2246921 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-793608"
	I0414 14:16:24.129659 2246921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:16:24.129708 2246921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:16:24.129784 2246921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:16:24.129837 2246921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:16:24.130608 2246921 out.go:177] * Verifying Kubernetes components...
	I0414 14:16:24.132086 2246921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:16:24.146823 2246921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35341
	I0414 14:16:24.147436 2246921 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:16:24.147995 2246921 main.go:141] libmachine: Using API Version  1
	I0414 14:16:24.148018 2246921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:16:24.148365 2246921 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:16:24.148957 2246921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:16:24.149005 2246921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:16:24.150594 2246921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44963
	I0414 14:16:24.151027 2246921 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:16:24.151504 2246921 main.go:141] libmachine: Using API Version  1
	I0414 14:16:24.151528 2246921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:16:24.151980 2246921 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:16:24.152177 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetState
	I0414 14:16:24.156031 2246921 addons.go:238] Setting addon default-storageclass=true in "enable-default-cni-793608"
	I0414 14:16:24.156084 2246921 host.go:66] Checking if "enable-default-cni-793608" exists ...
	I0414 14:16:24.156451 2246921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:16:24.156492 2246921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:16:24.166981 2246921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0414 14:16:24.167563 2246921 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:16:24.168160 2246921 main.go:141] libmachine: Using API Version  1
	I0414 14:16:24.168184 2246921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:16:24.168575 2246921 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:16:24.168767 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetState
	I0414 14:16:24.170740 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:24.172584 2246921 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:16:20.501644 2245195 pod_ready.go:103] pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:22.501757 2245195 pod_ready.go:103] pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:24.130339 2235858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 14:16:24.130631 2235858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 14:16:24.130653 2235858 kubeadm.go:310] 
	I0414 14:16:24.130704 2235858 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 14:16:24.130779 2235858 kubeadm.go:310] 		timed out waiting for the condition
	I0414 14:16:24.130797 2235858 kubeadm.go:310] 
	I0414 14:16:24.130844 2235858 kubeadm.go:310] 	This error is likely caused by:
	I0414 14:16:24.130904 2235858 kubeadm.go:310] 		- The kubelet is not running
	I0414 14:16:24.131056 2235858 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 14:16:24.131075 2235858 kubeadm.go:310] 
	I0414 14:16:24.131212 2235858 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 14:16:24.131254 2235858 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 14:16:24.131293 2235858 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 14:16:24.131299 2235858 kubeadm.go:310] 
	I0414 14:16:24.131421 2235858 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 14:16:24.131520 2235858 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 14:16:24.131528 2235858 kubeadm.go:310] 
	I0414 14:16:24.131660 2235858 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 14:16:24.131767 2235858 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 14:16:24.131853 2235858 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 14:16:24.131938 2235858 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 14:16:24.131946 2235858 kubeadm.go:310] 
	I0414 14:16:24.133108 2235858 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:16:24.133245 2235858 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 14:16:24.133343 2235858 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 14:16:24.133446 2235858 kubeadm.go:394] duration metric: took 8m0.052385423s to StartCluster
	I0414 14:16:24.133512 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 14:16:24.133587 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 14:16:24.199915 2235858 cri.go:89] found id: ""
	I0414 14:16:24.199946 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.199956 2235858 logs.go:284] No container was found matching "kube-apiserver"
	I0414 14:16:24.199965 2235858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 14:16:24.200032 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 14:16:24.247368 2235858 cri.go:89] found id: ""
	I0414 14:16:24.247407 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.247418 2235858 logs.go:284] No container was found matching "etcd"
	I0414 14:16:24.247427 2235858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 14:16:24.247496 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 14:16:24.288565 2235858 cri.go:89] found id: ""
	I0414 14:16:24.288598 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.288610 2235858 logs.go:284] No container was found matching "coredns"
	I0414 14:16:24.288618 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 14:16:24.288687 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 14:16:24.329531 2235858 cri.go:89] found id: ""
	I0414 14:16:24.329568 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.329581 2235858 logs.go:284] No container was found matching "kube-scheduler"
	I0414 14:16:24.329591 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 14:16:24.329663 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 14:16:24.372326 2235858 cri.go:89] found id: ""
	I0414 14:16:24.372361 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.372370 2235858 logs.go:284] No container was found matching "kube-proxy"
	I0414 14:16:24.372376 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 14:16:24.372447 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 14:16:24.423414 2235858 cri.go:89] found id: ""
	I0414 14:16:24.423447 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.423460 2235858 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 14:16:24.423469 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 14:16:24.423534 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 14:16:24.464828 2235858 cri.go:89] found id: ""
	I0414 14:16:24.464869 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.464882 2235858 logs.go:284] No container was found matching "kindnet"
	I0414 14:16:24.464890 2235858 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 14:16:24.464970 2235858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 14:16:24.505791 2235858 cri.go:89] found id: ""
	I0414 14:16:24.505820 2235858 logs.go:282] 0 containers: []
	W0414 14:16:24.505830 2235858 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 14:16:24.505844 2235858 logs.go:123] Gathering logs for kubelet ...
	I0414 14:16:24.505860 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 14:16:24.571908 2235858 logs.go:123] Gathering logs for dmesg ...
	I0414 14:16:24.571951 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 14:16:24.589579 2235858 logs.go:123] Gathering logs for describe nodes ...
	I0414 14:16:24.589614 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 14:16:24.680606 2235858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 14:16:24.680637 2235858 logs.go:123] Gathering logs for CRI-O ...
	I0414 14:16:24.680659 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 14:16:24.800813 2235858 logs.go:123] Gathering logs for container status ...
	I0414 14:16:24.800859 2235858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0414 14:16:24.849704 2235858 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 14:16:24.849777 2235858 out.go:270] * 
	W0414 14:16:24.849842 2235858 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 14:16:24.849868 2235858 out.go:270] * 
	W0414 14:16:24.851036 2235858 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:16:24.854829 2235858 out.go:201] 
	W0414 14:16:24.856198 2235858 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 14:16:24.856246 2235858 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 14:16:24.856269 2235858 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 14:16:24.857740 2235858 out.go:201] 
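Per the suggestion above and the linked issue (kubernetes/minikube#4172), the usual next step for K8S_KUBELET_NOT_RUNNING on this older Kubernetes version is to retry the start with the kubelet cgroup driver pinned to systemd. A sketch, with the profile name as a placeholder and the driver, runtime, and version taken from this test suite and log:

    minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd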
	I0414 14:16:24.173925 2246921 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:16:24.173948 2246921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 14:16:24.173970 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:24.176982 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:24.177524 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:24.177544 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:24.177698 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:24.177872 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:24.178021 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:24.178136 2246921 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa Username:docker}
	I0414 14:16:24.178979 2246921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40997
	I0414 14:16:24.179319 2246921 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:16:24.179745 2246921 main.go:141] libmachine: Using API Version  1
	I0414 14:16:24.179764 2246921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:16:24.180045 2246921 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:16:24.180622 2246921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:16:24.180659 2246921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:16:24.200932 2246921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44315
	I0414 14:16:24.201524 2246921 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:16:24.202218 2246921 main.go:141] libmachine: Using API Version  1
	I0414 14:16:24.202248 2246921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:16:24.202575 2246921 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:16:24.202815 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetState
	I0414 14:16:24.204228 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .DriverName
	I0414 14:16:24.204442 2246921 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 14:16:24.204458 2246921 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 14:16:24.204476 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHHostname
	I0414 14:16:24.207373 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:24.207818 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:5c:90", ip: ""} in network mk-enable-default-cni-793608: {Iface:virbr2 ExpiryTime:2025-04-14 15:15:53 +0000 UTC Type:0 Mac:52:54:00:17:5c:90 Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:enable-default-cni-793608 Clientid:01:52:54:00:17:5c:90}
	I0414 14:16:24.207841 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | domain enable-default-cni-793608 has defined IP address 192.168.61.51 and MAC address 52:54:00:17:5c:90 in network mk-enable-default-cni-793608
	I0414 14:16:24.207987 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHPort
	I0414 14:16:24.208140 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHKeyPath
	I0414 14:16:24.208270 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .GetSSHUsername
	I0414 14:16:24.208396 2246921 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/enable-default-cni-793608/id_rsa Username:docker}
	I0414 14:16:24.524660 2246921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:16:24.524689 2246921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 14:16:24.614615 2246921 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-793608" to be "Ready" ...
	I0414 14:16:24.623835 2246921 node_ready.go:49] node "enable-default-cni-793608" has status "Ready":"True"
	I0414 14:16:24.623859 2246921 node_ready.go:38] duration metric: took 9.186236ms for node "enable-default-cni-793608" to be "Ready" ...
	I0414 14:16:24.623871 2246921 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 14:16:24.633247 2246921 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-8vsj5" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:24.697336 2246921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 14:16:24.705511 2246921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:16:25.562908 2246921 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.038193704s)
	I0414 14:16:25.562944 2246921 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0414 14:16:25.563020 2246921 main.go:141] libmachine: Making call to close driver server
	I0414 14:16:25.563047 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .Close
	I0414 14:16:25.563371 2246921 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:16:25.563384 2246921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:16:25.563393 2246921 main.go:141] libmachine: Making call to close driver server
	I0414 14:16:25.563400 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .Close
	I0414 14:16:25.563838 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Closing plugin on server side
	I0414 14:16:25.563905 2246921 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:16:25.563929 2246921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:16:25.594180 2246921 main.go:141] libmachine: Making call to close driver server
	I0414 14:16:25.594204 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .Close
	I0414 14:16:25.594584 2246921 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:16:25.594592 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Closing plugin on server side
	I0414 14:16:25.594607 2246921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:16:26.077943 2246921 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-793608" context rescaled to 1 replicas
	I0414 14:16:26.100977 2246921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.395415307s)
	I0414 14:16:26.101044 2246921 main.go:141] libmachine: Making call to close driver server
	I0414 14:16:26.101056 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .Close
	I0414 14:16:26.101405 2246921 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:16:26.101421 2246921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:16:26.101430 2246921 main.go:141] libmachine: Making call to close driver server
	I0414 14:16:26.101438 2246921 main.go:141] libmachine: (enable-default-cni-793608) Calling .Close
	I0414 14:16:26.101450 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Closing plugin on server side
	I0414 14:16:26.101672 2246921 main.go:141] libmachine: (enable-default-cni-793608) DBG | Closing plugin on server side
	I0414 14:16:26.101713 2246921 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:16:26.101726 2246921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:16:26.103632 2246921 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0414 14:16:25.005086 2245195 pod_ready.go:93] pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace has status "Ready":"True"
	I0414 14:16:25.005120 2245195 pod_ready.go:82] duration metric: took 14.010099956s for pod "coredns-668d6bf9bc-hts2b" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.005134 2245195 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.013539 2245195 pod_ready.go:93] pod "etcd-flannel-793608" in "kube-system" namespace has status "Ready":"True"
	I0414 14:16:25.013565 2245195 pod_ready.go:82] duration metric: took 8.422542ms for pod "etcd-flannel-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.013579 2245195 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.021742 2245195 pod_ready.go:93] pod "kube-apiserver-flannel-793608" in "kube-system" namespace has status "Ready":"True"
	I0414 14:16:25.021769 2245195 pod_ready.go:82] duration metric: took 8.182307ms for pod "kube-apiserver-flannel-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.021783 2245195 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.028866 2245195 pod_ready.go:93] pod "kube-controller-manager-flannel-793608" in "kube-system" namespace has status "Ready":"True"
	I0414 14:16:25.028895 2245195 pod_ready.go:82] duration metric: took 7.104091ms for pod "kube-controller-manager-flannel-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.028917 2245195 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-l2wdq" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.038885 2245195 pod_ready.go:93] pod "kube-proxy-l2wdq" in "kube-system" namespace has status "Ready":"True"
	I0414 14:16:25.038913 2245195 pod_ready.go:82] duration metric: took 9.98732ms for pod "kube-proxy-l2wdq" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.038926 2245195 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.399809 2245195 pod_ready.go:93] pod "kube-scheduler-flannel-793608" in "kube-system" namespace has status "Ready":"True"
	I0414 14:16:25.399834 2245195 pod_ready.go:82] duration metric: took 360.900191ms for pod "kube-scheduler-flannel-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:25.399846 2245195 pod_ready.go:39] duration metric: took 14.420128309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 14:16:25.399864 2245195 api_server.go:52] waiting for apiserver process to appear ...
	I0414 14:16:25.399918 2245195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:16:25.417850 2245195 api_server.go:72] duration metric: took 26.777514906s to wait for apiserver process to appear ...
	I0414 14:16:25.417883 2245195 api_server.go:88] waiting for apiserver healthz status ...
	I0414 14:16:25.417903 2245195 api_server.go:253] Checking apiserver healthz at https://192.168.72.179:8443/healthz ...
	I0414 14:16:25.424022 2245195 api_server.go:279] https://192.168.72.179:8443/healthz returned 200:
	ok
	I0414 14:16:25.425022 2245195 api_server.go:141] control plane version: v1.32.2
	I0414 14:16:25.425045 2245195 api_server.go:131] duration metric: took 7.153666ms to wait for apiserver health ...
	I0414 14:16:25.425055 2245195 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 14:16:25.602605 2245195 system_pods.go:59] 7 kube-system pods found
	I0414 14:16:25.602662 2245195 system_pods.go:61] "coredns-668d6bf9bc-hts2b" [f50a2820-c4ea-48f9-af3f-66436de96f27] Running
	I0414 14:16:25.602673 2245195 system_pods.go:61] "etcd-flannel-793608" [0b5f8d64-b6c5-4c4c-9b48-b85697bb07b2] Running
	I0414 14:16:25.602680 2245195 system_pods.go:61] "kube-apiserver-flannel-793608" [5576dc53-7585-4a6b-bb8e-c42042292362] Running
	I0414 14:16:25.602688 2245195 system_pods.go:61] "kube-controller-manager-flannel-793608" [9d76aa30-9b55-48da-a5dd-cedc72aa8ce1] Running
	I0414 14:16:25.602703 2245195 system_pods.go:61] "kube-proxy-l2wdq" [da2a410f-f489-4449-b993-b45c7b21f670] Running
	I0414 14:16:25.602710 2245195 system_pods.go:61] "kube-scheduler-flannel-793608" [2f7bc4a7-326d-4068-b695-5c875b074669] Running
	I0414 14:16:25.602726 2245195 system_pods.go:61] "storage-provisioner" [ae27af3a-026b-498c-b411-2b7089e276bf] Running
	I0414 14:16:25.602736 2245195 system_pods.go:74] duration metric: took 177.67285ms to wait for pod list to return data ...
	I0414 14:16:25.602753 2245195 default_sa.go:34] waiting for default service account to be created ...
	I0414 14:16:25.800258 2245195 default_sa.go:45] found service account: "default"
	I0414 14:16:25.800293 2245195 default_sa.go:55] duration metric: took 197.529406ms for default service account to be created ...
	I0414 14:16:25.800304 2245195 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 14:16:26.000548 2245195 system_pods.go:86] 7 kube-system pods found
	I0414 14:16:26.000595 2245195 system_pods.go:89] "coredns-668d6bf9bc-hts2b" [f50a2820-c4ea-48f9-af3f-66436de96f27] Running
	I0414 14:16:26.000605 2245195 system_pods.go:89] "etcd-flannel-793608" [0b5f8d64-b6c5-4c4c-9b48-b85697bb07b2] Running
	I0414 14:16:26.000612 2245195 system_pods.go:89] "kube-apiserver-flannel-793608" [5576dc53-7585-4a6b-bb8e-c42042292362] Running
	I0414 14:16:26.000619 2245195 system_pods.go:89] "kube-controller-manager-flannel-793608" [9d76aa30-9b55-48da-a5dd-cedc72aa8ce1] Running
	I0414 14:16:26.000625 2245195 system_pods.go:89] "kube-proxy-l2wdq" [da2a410f-f489-4449-b993-b45c7b21f670] Running
	I0414 14:16:26.000631 2245195 system_pods.go:89] "kube-scheduler-flannel-793608" [2f7bc4a7-326d-4068-b695-5c875b074669] Running
	I0414 14:16:26.000637 2245195 system_pods.go:89] "storage-provisioner" [ae27af3a-026b-498c-b411-2b7089e276bf] Running
	I0414 14:16:26.000650 2245195 system_pods.go:126] duration metric: took 200.337178ms to wait for k8s-apps to be running ...
	I0414 14:16:26.000661 2245195 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 14:16:26.000754 2245195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:16:26.019676 2245195 system_svc.go:56] duration metric: took 19.001248ms WaitForService to wait for kubelet
	I0414 14:16:26.019718 2245195 kubeadm.go:582] duration metric: took 27.379387997s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:16:26.019745 2245195 node_conditions.go:102] verifying NodePressure condition ...
	I0414 14:16:26.200273 2245195 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 14:16:26.200312 2245195 node_conditions.go:123] node cpu capacity is 2
	I0414 14:16:26.200331 2245195 node_conditions.go:105] duration metric: took 180.579715ms to run NodePressure ...
	I0414 14:16:26.200347 2245195 start.go:241] waiting for startup goroutines ...
	I0414 14:16:26.200357 2245195 start.go:246] waiting for cluster config update ...
	I0414 14:16:26.200371 2245195 start.go:255] writing updated cluster config ...
	I0414 14:16:26.200750 2245195 ssh_runner.go:195] Run: rm -f paused
	I0414 14:16:26.255354 2245195 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 14:16:26.257685 2245195 out.go:177] * Done! kubectl is now configured to use "flannel-793608" cluster and "default" namespace by default
	I0414 14:16:26.104814 2246921 addons.go:514] duration metric: took 1.975843927s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0414 14:16:26.639710 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-8vsj5" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:28.640070 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-8vsj5" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:31.138255 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-8vsj5" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:33.139626 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-8vsj5" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:35.639235 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-8vsj5" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:36.638566 2246921 pod_ready.go:98] pod "coredns-668d6bf9bc-8vsj5" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:36 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:24 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:24 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:24 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:24 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.51 HostIPs:[{IP:192.168.61.51}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-04-14 14:16:24 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-14 14:16:26 +0000 UTC,FinishedAt:2025-04-14 14:16:36 +0000 UTC,ContainerID:cri-o://bdc92b8cf72dd46966f75e5f06abf6cdb4bfd8aa34caa570309836c58cf89152,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://bdc92b8cf72dd46966f75e5f06abf6cdb4bfd8aa34caa570309836c58cf89152 Started:0xc00167d900 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001ddfa60} {Name:kube-api-access-vl56x MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001ddfa90}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0414 14:16:36.638594 2246921 pod_ready.go:82] duration metric: took 12.005319047s for pod "coredns-668d6bf9bc-8vsj5" in "kube-system" namespace to be "Ready" ...
	E0414 14:16:36.638605 2246921 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-8vsj5" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:36 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:24 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:24 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:24 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 14:16:24 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.51 HostIPs:[{IP:192.168.61.51}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-04-14 14:16:24 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-14 14:16:26 +0000 UTC,FinishedAt:2025-04-14 14:16:36 +0000 UTC,ContainerID:cri-o://bdc92b8cf72dd46966f75e5f06abf6cdb4bfd8aa34caa570309836c58cf89152,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://bdc92b8cf72dd46966f75e5f06abf6cdb4bfd8aa34caa570309836c58cf89152 Started:0xc00167d900 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001ddfa60} {Name:kube-api-access-vl56x MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001ddfa90}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0414 14:16:36.638622 2246921 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace to be "Ready" ...
	I0414 14:16:38.644543 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:41.144440 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:43.644216 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:46.143915 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:48.144952 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:50.645522 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:53.144897 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:55.145252 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:16:57.644328 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:17:00.145274 2246921 pod_ready.go:103] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:17:02.151799 2246921 pod_ready.go:93] pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace has status "Ready":"True"
	I0414 14:17:02.151822 2246921 pod_ready.go:82] duration metric: took 25.513193873s for pod "coredns-668d6bf9bc-jbt4j" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.151833 2246921 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.156058 2246921 pod_ready.go:93] pod "etcd-enable-default-cni-793608" in "kube-system" namespace has status "Ready":"True"
	I0414 14:17:02.156076 2246921 pod_ready.go:82] duration metric: took 4.237594ms for pod "etcd-enable-default-cni-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.156085 2246921 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.159224 2246921 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-793608" in "kube-system" namespace has status "Ready":"True"
	I0414 14:17:02.159240 2246921 pod_ready.go:82] duration metric: took 3.150225ms for pod "kube-apiserver-enable-default-cni-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.159250 2246921 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.162485 2246921 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-793608" in "kube-system" namespace has status "Ready":"True"
	I0414 14:17:02.162505 2246921 pod_ready.go:82] duration metric: took 3.248888ms for pod "kube-controller-manager-enable-default-cni-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.162513 2246921 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-ztqkc" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.165500 2246921 pod_ready.go:93] pod "kube-proxy-ztqkc" in "kube-system" namespace has status "Ready":"True"
	I0414 14:17:02.165515 2246921 pod_ready.go:82] duration metric: took 2.997241ms for pod "kube-proxy-ztqkc" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.165524 2246921 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.542630 2246921 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-793608" in "kube-system" namespace has status "Ready":"True"
	I0414 14:17:02.542653 2246921 pod_ready.go:82] duration metric: took 377.123651ms for pod "kube-scheduler-enable-default-cni-793608" in "kube-system" namespace to be "Ready" ...
	I0414 14:17:02.542661 2246921 pod_ready.go:39] duration metric: took 37.918773646s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 14:17:02.542677 2246921 api_server.go:52] waiting for apiserver process to appear ...
	I0414 14:17:02.542724 2246921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:17:02.558059 2246921 api_server.go:72] duration metric: took 38.429144648s to wait for apiserver process to appear ...
	I0414 14:17:02.558091 2246921 api_server.go:88] waiting for apiserver healthz status ...
	I0414 14:17:02.558115 2246921 api_server.go:253] Checking apiserver healthz at https://192.168.61.51:8443/healthz ...
	I0414 14:17:02.562804 2246921 api_server.go:279] https://192.168.61.51:8443/healthz returned 200:
	ok
	I0414 14:17:02.563889 2246921 api_server.go:141] control plane version: v1.32.2
	I0414 14:17:02.563911 2246921 api_server.go:131] duration metric: took 5.813659ms to wait for apiserver health ...
	I0414 14:17:02.563919 2246921 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 14:17:02.745213 2246921 system_pods.go:59] 7 kube-system pods found
	I0414 14:17:02.745247 2246921 system_pods.go:61] "coredns-668d6bf9bc-jbt4j" [b142d5d4-3ab2-450b-8396-cafb1d00b2a3] Running
	I0414 14:17:02.745252 2246921 system_pods.go:61] "etcd-enable-default-cni-793608" [6aced462-ff90-40c9-b55c-3217fb8d2cfb] Running
	I0414 14:17:02.745257 2246921 system_pods.go:61] "kube-apiserver-enable-default-cni-793608" [57eb96df-a3c9-4e5e-b3f4-03cbaf559917] Running
	I0414 14:17:02.745261 2246921 system_pods.go:61] "kube-controller-manager-enable-default-cni-793608" [e5d364e8-3779-4cf7-ac59-54cbe5bc055d] Running
	I0414 14:17:02.745265 2246921 system_pods.go:61] "kube-proxy-ztqkc" [4a64fc36-d13b-4c5c-9bf1-17dd88ef4d34] Running
	I0414 14:17:02.745268 2246921 system_pods.go:61] "kube-scheduler-enable-default-cni-793608" [95e190b4-7390-4388-85eb-85157648e866] Running
	I0414 14:17:02.745271 2246921 system_pods.go:61] "storage-provisioner" [8666788b-504e-4f32-8dd5-c4da6070f943] Running
	I0414 14:17:02.745278 2246921 system_pods.go:74] duration metric: took 181.352893ms to wait for pod list to return data ...
	I0414 14:17:02.745285 2246921 default_sa.go:34] waiting for default service account to be created ...
	I0414 14:17:02.942681 2246921 default_sa.go:45] found service account: "default"
	I0414 14:17:02.942711 2246921 default_sa.go:55] duration metric: took 197.418865ms for default service account to be created ...
	I0414 14:17:02.942721 2246921 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 14:17:03.144283 2246921 system_pods.go:86] 7 kube-system pods found
	I0414 14:17:03.144315 2246921 system_pods.go:89] "coredns-668d6bf9bc-jbt4j" [b142d5d4-3ab2-450b-8396-cafb1d00b2a3] Running
	I0414 14:17:03.144320 2246921 system_pods.go:89] "etcd-enable-default-cni-793608" [6aced462-ff90-40c9-b55c-3217fb8d2cfb] Running
	I0414 14:17:03.144324 2246921 system_pods.go:89] "kube-apiserver-enable-default-cni-793608" [57eb96df-a3c9-4e5e-b3f4-03cbaf559917] Running
	I0414 14:17:03.144329 2246921 system_pods.go:89] "kube-controller-manager-enable-default-cni-793608" [e5d364e8-3779-4cf7-ac59-54cbe5bc055d] Running
	I0414 14:17:03.144332 2246921 system_pods.go:89] "kube-proxy-ztqkc" [4a64fc36-d13b-4c5c-9bf1-17dd88ef4d34] Running
	I0414 14:17:03.144336 2246921 system_pods.go:89] "kube-scheduler-enable-default-cni-793608" [95e190b4-7390-4388-85eb-85157648e866] Running
	I0414 14:17:03.144339 2246921 system_pods.go:89] "storage-provisioner" [8666788b-504e-4f32-8dd5-c4da6070f943] Running
	I0414 14:17:03.144348 2246921 system_pods.go:126] duration metric: took 201.619782ms to wait for k8s-apps to be running ...
	I0414 14:17:03.144358 2246921 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 14:17:03.144414 2246921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:17:03.162718 2246921 system_svc.go:56] duration metric: took 18.34957ms WaitForService to wait for kubelet
	I0414 14:17:03.162746 2246921 kubeadm.go:582] duration metric: took 39.033839006s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:17:03.162764 2246921 node_conditions.go:102] verifying NodePressure condition ...
	I0414 14:17:03.346739 2246921 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 14:17:03.346770 2246921 node_conditions.go:123] node cpu capacity is 2
	I0414 14:17:03.346784 2246921 node_conditions.go:105] duration metric: took 184.014842ms to run NodePressure ...
	I0414 14:17:03.346796 2246921 start.go:241] waiting for startup goroutines ...
	I0414 14:17:03.346803 2246921 start.go:246] waiting for cluster config update ...
	I0414 14:17:03.346813 2246921 start.go:255] writing updated cluster config ...
	I0414 14:17:03.347081 2246921 ssh_runner.go:195] Run: rm -f paused
	I0414 14:17:03.396319 2246921 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 14:17:03.399139 2246921 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-793608" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.741554979Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744641067741536185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59458136-33b7-44ef-afa5-ed8ec31eb6a4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.742157125Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2142804-2209-4088-aa6f-4cd6f7c24134 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.742209541Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2142804-2209-4088-aa6f-4cd6f7c24134 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.742256077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c2142804-2209-4088-aa6f-4cd6f7c24134 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.774696327Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9831b5e2-429a-4fd5-bab3-77ba55f441c0 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.774784962Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9831b5e2-429a-4fd5-bab3-77ba55f441c0 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.775777349Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79d27675-7bd0-446a-b793-d2f68a3d6492 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.776263376Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744641067776233747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79d27675-7bd0-446a-b793-d2f68a3d6492 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.776813920Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13e40bbe-72cf-4072-8cb0-2f46e0bcaa51 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.776861866Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13e40bbe-72cf-4072-8cb0-2f46e0bcaa51 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.776891578Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=13e40bbe-72cf-4072-8cb0-2f46e0bcaa51 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.806237884Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c2399ec-0327-44c7-9788-86f6652a5f8c name=/runtime.v1.RuntimeService/Version
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.806308949Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c2399ec-0327-44c7-9788-86f6652a5f8c name=/runtime.v1.RuntimeService/Version
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.807671099Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=606703dd-e1d6-42e3-9f20-42bbd8c6efcf name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.808109174Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744641067808022741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=606703dd-e1d6-42e3-9f20-42bbd8c6efcf name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.808720031Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76b77673-ce3a-4a8f-84a8-24cdd8cf0571 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.808772443Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76b77673-ce3a-4a8f-84a8-24cdd8cf0571 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.808803491Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=76b77673-ce3a-4a8f-84a8-24cdd8cf0571 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.838263339Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=22677380-1144-49b7-bd38-3c3d170d0de6 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.838336070Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22677380-1144-49b7-bd38-3c3d170d0de6 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.839253798Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0dd34e6e-fb4e-4c01-a8bc-3068dd0f42b8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.839619626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744641067839596296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0dd34e6e-fb4e-4c01-a8bc-3068dd0f42b8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.840163657Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f95e9758-4dff-4025-abc9-224df2f3822f name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.840215509Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f95e9758-4dff-4025-abc9-224df2f3822f name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:31:07 old-k8s-version-954411 crio[632]: time="2025-04-14 14:31:07.840249259Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f95e9758-4dff-4025-abc9-224df2f3822f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr14 14:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055482] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043064] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr14 14:08] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.836790] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.609210] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.084082] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.058063] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072240] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.169515] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.152610] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.265682] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +8.281644] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +0.060503] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.889080] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[ +11.358788] kauditd_printk_skb: 46 callbacks suppressed
	[Apr14 14:12] systemd-fstab-generator[5015]: Ignoring "noauto" option for root device
	[Apr14 14:14] systemd-fstab-generator[5297]: Ignoring "noauto" option for root device
	[  +0.108430] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:31:08 up 23 min,  0 users,  load average: 0.03, 0.02, 0.00
	Linux old-k8s-version-954411 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 14 14:31:04 old-k8s-version-954411 kubelet[7095]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:134 +0x191
	Apr 14 14:31:04 old-k8s-version-954411 kubelet[7095]: goroutine 151 [runnable]:
	Apr 14 14:31:04 old-k8s-version-954411 kubelet[7095]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000246fc0, 0xc0000a60c0)
	Apr 14 14:31:04 old-k8s-version-954411 kubelet[7095]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:218
	Apr 14 14:31:04 old-k8s-version-954411 kubelet[7095]: created by k8s.io/kubernetes/pkg/kubelet.NewMainKubelet
	Apr 14 14:31:04 old-k8s-version-954411 kubelet[7095]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:439 +0x6849
	Apr 14 14:31:04 old-k8s-version-954411 kubelet[7095]: goroutine 134 [select]:
	Apr 14 14:31:04 old-k8s-version-954411 kubelet[7095]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000547db0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Apr 14 14:31:04 old-k8s-version-954411 kubelet[7095]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Apr 14 14:31:04 old-k8s-version-954411 kubelet[7095]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000c0e360, 0x0, 0x0)
	Apr 14 14:31:04 old-k8s-version-954411 kubelet[7095]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Apr 14 14:31:04 old-k8s-version-954411 kubelet[7095]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000a46e00)
	Apr 14 14:31:04 old-k8s-version-954411 kubelet[7095]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Apr 14 14:31:04 old-k8s-version-954411 kubelet[7095]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Apr 14 14:31:04 old-k8s-version-954411 kubelet[7095]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Apr 14 14:31:04 old-k8s-version-954411 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 14 14:31:04 old-k8s-version-954411 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 14 14:31:05 old-k8s-version-954411 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 173.
	Apr 14 14:31:05 old-k8s-version-954411 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 14 14:31:05 old-k8s-version-954411 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 14 14:31:05 old-k8s-version-954411 kubelet[7105]: I0414 14:31:05.531359    7105 server.go:416] Version: v1.20.0
	Apr 14 14:31:05 old-k8s-version-954411 kubelet[7105]: I0414 14:31:05.531678    7105 server.go:837] Client rotation is on, will bootstrap in background
	Apr 14 14:31:05 old-k8s-version-954411 kubelet[7105]: I0414 14:31:05.533640    7105 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 14 14:31:05 old-k8s-version-954411 kubelet[7105]: I0414 14:31:05.534854    7105 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Apr 14 14:31:05 old-k8s-version-954411 kubelet[7105]: W0414 14:31:05.534985    7105 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-954411 -n old-k8s-version-954411
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-954411 -n old-k8s-version-954411: exit status 2 (221.676248ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-954411" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (340.14s)
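The kubelet log above shows the v1.20.0 kubelet on old-k8s-version-954411 crash-looping (systemd restart counter at 173) and warning "Cannot detect current cgroup on cgroup v2", which matches the "Stopped" apiserver status and the refused connection to localhost:8443 in the describe-nodes step. A minimal triage sketch, assuming SSH access to the profile's VM through the same minikube binary (these commands are illustrative and were not part of the test run):

	# Print the filesystem type of /sys/fs/cgroup; "cgroup2fs" means the guest is on cgroup v2.
	out/minikube-linux-amd64 -p old-k8s-version-954411 ssh "stat -fc %T /sys/fs/cgroup"
	# Check whether kubelet is still restarting and capture its most recent log lines.
	out/minikube-linux-amd64 -p old-k8s-version-954411 ssh "sudo systemctl is-active kubelet; sudo journalctl -u kubelet --no-pager -n 40"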

                                                
                                    

Test pass (271/321)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 32.81
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.2/json-events 16.81
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.06
18 TestDownloadOnly/v1.32.2/DeleteAll 0.14
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.62
22 TestOffline 101.51
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 204.3
31 TestAddons/serial/GCPAuth/Namespaces 1.5
32 TestAddons/serial/GCPAuth/FakeCredentials 14.5
35 TestAddons/parallel/Registry 23.63
37 TestAddons/parallel/InspektorGadget 12.22
38 TestAddons/parallel/MetricsServer 7.5
40 TestAddons/parallel/CSI 69.94
41 TestAddons/parallel/Headlamp 20.71
42 TestAddons/parallel/CloudSpanner 5.64
43 TestAddons/parallel/LocalPath 56.32
44 TestAddons/parallel/NvidiaDevicePlugin 6.62
45 TestAddons/parallel/Yakd 11.74
47 TestAddons/StoppedEnableDisable 91.27
48 TestCertOptions 49.43
49 TestCertExpiration 305.76
51 TestForceSystemdFlag 48.57
52 TestForceSystemdEnv 71.1
54 TestKVMDriverInstallOrUpdate 7.95
58 TestErrorSpam/setup 42.12
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.75
61 TestErrorSpam/pause 1.63
62 TestErrorSpam/unpause 1.85
63 TestErrorSpam/stop 4.8
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 58.89
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 39.35
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.12
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.33
75 TestFunctional/serial/CacheCmd/cache/add_local 2.77
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.75
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 33.85
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.49
86 TestFunctional/serial/LogsFileCmd 1.47
87 TestFunctional/serial/InvalidService 4.8
89 TestFunctional/parallel/ConfigCmd 0.38
90 TestFunctional/parallel/DashboardCmd 15.53
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.14
93 TestFunctional/parallel/StatusCmd 0.91
97 TestFunctional/parallel/ServiceCmdConnect 24.47
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 51.99
101 TestFunctional/parallel/SSHCmd 0.45
102 TestFunctional/parallel/CpCmd 1.35
103 TestFunctional/parallel/MySQL 23.32
104 TestFunctional/parallel/FileSync 0.22
105 TestFunctional/parallel/CertSync 1.37
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
113 TestFunctional/parallel/License 0.81
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.6
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
120 TestFunctional/parallel/ImageCommands/ImageBuild 6.38
121 TestFunctional/parallel/ImageCommands/Setup 2.42
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
133 TestFunctional/parallel/ProfileCmd/profile_list 0.34
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
137 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.47
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.19
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.57
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 5.15
141 TestFunctional/parallel/ImageCommands/ImageRemove 1.06
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.43
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
144 TestFunctional/parallel/ServiceCmd/DeployApp 7.18
145 TestFunctional/parallel/MountCmd/any-port 14.63
146 TestFunctional/parallel/ServiceCmd/List 0.52
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
149 TestFunctional/parallel/ServiceCmd/Format 0.29
150 TestFunctional/parallel/ServiceCmd/URL 0.3
151 TestFunctional/parallel/MountCmd/specific-port 1.69
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.43
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 203.07
161 TestMultiControlPlane/serial/DeployApp 12.33
162 TestMultiControlPlane/serial/PingHostFromPods 1.26
163 TestMultiControlPlane/serial/AddWorkerNode 60.17
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
166 TestMultiControlPlane/serial/CopyFile 13.26
167 TestMultiControlPlane/serial/StopSecondaryNode 91.65
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.64
169 TestMultiControlPlane/serial/RestartSecondaryNode 47.19
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 449.57
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.47
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
174 TestMultiControlPlane/serial/StopCluster 272.98
175 TestMultiControlPlane/serial/RestartCluster 99.27
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.63
177 TestMultiControlPlane/serial/AddSecondaryNode 82.59
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
182 TestJSONOutput/start/Command 58.58
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.73
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.63
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 7.36
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.2
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 85.75
214 TestMountStart/serial/StartWithMountFirst 29.96
215 TestMountStart/serial/VerifyMountFirst 0.39
216 TestMountStart/serial/StartWithMountSecond 31.4
217 TestMountStart/serial/VerifyMountSecond 0.38
218 TestMountStart/serial/DeleteFirst 0.68
219 TestMountStart/serial/VerifyMountPostDelete 0.38
220 TestMountStart/serial/Stop 2.28
221 TestMountStart/serial/RestartStopped 23.56
222 TestMountStart/serial/VerifyMountPostStop 0.38
225 TestMultiNode/serial/FreshStart2Nodes 118.19
226 TestMultiNode/serial/DeployApp2Nodes 10.62
227 TestMultiNode/serial/PingHostFrom2Pods 0.8
228 TestMultiNode/serial/AddNode 54.68
229 TestMultiNode/serial/MultiNodeLabels 0.07
230 TestMultiNode/serial/ProfileList 0.59
231 TestMultiNode/serial/CopyFile 7.47
232 TestMultiNode/serial/StopNode 2.39
233 TestMultiNode/serial/StartAfterStop 40.32
234 TestMultiNode/serial/RestartKeepsNodes 343.96
235 TestMultiNode/serial/DeleteNode 2.7
236 TestMultiNode/serial/StopMultiNode 181.88
237 TestMultiNode/serial/RestartMultiNode 157.76
238 TestMultiNode/serial/ValidateNameConflict 47.43
245 TestScheduledStopUnix 115.74
249 TestRunningBinaryUpgrade 161.18
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
255 TestNoKubernetes/serial/StartWithK8s 120.47
256 TestStoppedBinaryUpgrade/Setup 3.41
257 TestStoppedBinaryUpgrade/Upgrade 174.96
258 TestNoKubernetes/serial/StartWithStopK8s 18.31
259 TestNoKubernetes/serial/Start 46.11
260 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
261 TestNoKubernetes/serial/ProfileList 5.1
262 TestNoKubernetes/serial/Stop 1.51
263 TestNoKubernetes/serial/StartNoArgs 23.09
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
265 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
274 TestPause/serial/Start 91.18
282 TestNetworkPlugins/group/false 3.11
290 TestStartStop/group/no-preload/serial/FirstStart 130.95
292 TestStartStop/group/embed-certs/serial/FirstStart 97.01
293 TestStartStop/group/embed-certs/serial/DeployApp 15.29
294 TestStartStop/group/no-preload/serial/DeployApp 13.3
295 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
296 TestStartStop/group/embed-certs/serial/Stop 91.08
298 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.37
299 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
300 TestStartStop/group/no-preload/serial/Stop 91.3
301 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 15.3
302 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
303 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.12
304 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
305 TestStartStop/group/embed-certs/serial/SecondStart 329.28
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
307 TestStartStop/group/no-preload/serial/SecondStart 364.73
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
311 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 336.08
312 TestStartStop/group/old-k8s-version/serial/Stop 2.3
313 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
315 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.01
316 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
317 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
318 TestStartStop/group/embed-certs/serial/Pause 2.63
320 TestStartStop/group/newest-cni/serial/FirstStart 46.68
321 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
323 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
324 TestStartStop/group/no-preload/serial/Pause 2.92
325 TestNetworkPlugins/group/auto/Start 89.42
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.48
328 TestStartStop/group/newest-cni/serial/Stop 10.74
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
330 TestStartStop/group/newest-cni/serial/SecondStart 53.96
331 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10
332 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
333 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
334 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.31
335 TestNetworkPlugins/group/kindnet/Start 75.03
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
339 TestStartStop/group/newest-cni/serial/Pause 2.68
340 TestNetworkPlugins/group/calico/Start 89.27
341 TestNetworkPlugins/group/auto/KubeletFlags 0.24
342 TestNetworkPlugins/group/auto/NetCatPod 11.32
343 TestNetworkPlugins/group/auto/DNS 0.14
344 TestNetworkPlugins/group/auto/Localhost 0.12
345 TestNetworkPlugins/group/auto/HairPin 0.12
346 TestNetworkPlugins/group/custom-flannel/Start 74.8
347 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
348 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
349 TestNetworkPlugins/group/kindnet/NetCatPod 11.25
350 TestNetworkPlugins/group/kindnet/DNS 0.16
351 TestNetworkPlugins/group/kindnet/Localhost 0.14
352 TestNetworkPlugins/group/kindnet/HairPin 0.12
353 TestNetworkPlugins/group/bridge/Start 58.9
354 TestNetworkPlugins/group/calico/ControllerPod 6.01
355 TestNetworkPlugins/group/calico/KubeletFlags 0.24
356 TestNetworkPlugins/group/calico/NetCatPod 10.25
357 TestNetworkPlugins/group/calico/DNS 0.19
358 TestNetworkPlugins/group/calico/Localhost 0.13
359 TestNetworkPlugins/group/calico/HairPin 0.12
360 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
361 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.32
362 TestNetworkPlugins/group/flannel/Start 76.69
363 TestNetworkPlugins/group/custom-flannel/DNS 0.17
364 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
365 TestNetworkPlugins/group/custom-flannel/HairPin 0.25
366 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
367 TestNetworkPlugins/group/bridge/NetCatPod 10.26
368 TestNetworkPlugins/group/enable-default-cni/Start 91.75
369 TestNetworkPlugins/group/bridge/DNS 0.18
370 TestNetworkPlugins/group/bridge/Localhost 0.13
371 TestNetworkPlugins/group/bridge/HairPin 0.13
372 TestNetworkPlugins/group/flannel/ControllerPod 6.01
374 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
375 TestNetworkPlugins/group/flannel/NetCatPod 11.21
376 TestNetworkPlugins/group/flannel/DNS 0.16
377 TestNetworkPlugins/group/flannel/Localhost 0.11
378 TestNetworkPlugins/group/flannel/HairPin 0.13
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.23
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
x
+
TestDownloadOnly/v1.20.0/json-events (32.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-101341 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-101341 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (32.812830313s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (32.81s)
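The test above only primes the download cache; it never boots a VM. A minimal sketch of reproducing that step outside the test harness, based on the exact command recorded above (the profile name "download-demo" is illustrative, and the binary path assumes a locally built out/minikube-linux-amd64):

# Prime the ISO/preload cache for a given Kubernetes version without creating a machine
out/minikube-linux-amd64 start -o=json --download-only -p download-demo \
  --force --alsologtostderr \
  --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2

# The preload tarball should then appear under the minikube home, e.g.:
ls ~/.minikube/cache/preloaded-tarball/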

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0414 12:53:41.046386 2190400 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0414 12:53:41.046524 2190400 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-101341
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-101341: exit status 85 (61.919146ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-101341 | jenkins | v1.35.0 | 14 Apr 25 12:53 UTC |          |
	|         | -p download-only-101341        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 12:53:08
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 12:53:08.276722 2190412 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:53:08.277044 2190412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:53:08.277055 2190412 out.go:358] Setting ErrFile to fd 2...
	I0414 12:53:08.277059 2190412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:53:08.277293 2190412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	W0414 12:53:08.277450 2190412 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20623-2183077/.minikube/config/config.json: open /home/jenkins/minikube-integration/20623-2183077/.minikube/config/config.json: no such file or directory
	I0414 12:53:08.278119 2190412 out.go:352] Setting JSON to true
	I0414 12:53:08.279139 2190412 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":164127,"bootTime":1744471061,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 12:53:08.279262 2190412 start.go:139] virtualization: kvm guest
	I0414 12:53:08.281597 2190412 out.go:97] [download-only-101341] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0414 12:53:08.281740 2190412 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball: no such file or directory
	I0414 12:53:08.281770 2190412 notify.go:220] Checking for updates...
	I0414 12:53:08.283522 2190412 out.go:169] MINIKUBE_LOCATION=20623
	I0414 12:53:08.284709 2190412 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 12:53:08.285807 2190412 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 12:53:08.287043 2190412 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 12:53:08.288309 2190412 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0414 12:53:08.290479 2190412 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0414 12:53:08.290718 2190412 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 12:53:08.325373 2190412 out.go:97] Using the kvm2 driver based on user configuration
	I0414 12:53:08.325414 2190412 start.go:297] selected driver: kvm2
	I0414 12:53:08.325423 2190412 start.go:901] validating driver "kvm2" against <nil>
	I0414 12:53:08.325752 2190412 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:53:08.325848 2190412 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20623-2183077/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 12:53:08.341414 2190412 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 12:53:08.341467 2190412 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 12:53:08.341963 2190412 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0414 12:53:08.342109 2190412 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 12:53:08.342144 2190412 cni.go:84] Creating CNI manager for ""
	I0414 12:53:08.342205 2190412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:53:08.342213 2190412 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 12:53:08.342273 2190412 start.go:340] cluster config:
	{Name:download-only-101341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-101341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:53:08.342445 2190412 iso.go:125] acquiring lock: {Name:mk1b6bc811d798b73231639961523f4c8d001a9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:53:08.344715 2190412 out.go:97] Downloading VM boot image ...
	I0414 12:53:08.344772 2190412 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 12:53:21.396569 2190412 out.go:97] Starting "download-only-101341" primary control-plane node in "download-only-101341" cluster
	I0414 12:53:21.396618 2190412 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 12:53:21.557943 2190412 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 12:53:21.557978 2190412 cache.go:56] Caching tarball of preloaded images
	I0414 12:53:21.558180 2190412 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 12:53:21.560056 2190412 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0414 12:53:21.560084 2190412 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0414 12:53:21.714635 2190412 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-101341 host does not exist
	  To start a cluster, run: "minikube start -p download-only-101341"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-101341
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/json-events (16.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-897367 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-897367 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (16.814422181s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (16.81s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0414 12:53:58.196850 2190400 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0414 12:53:58.196914 2190400 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-897367
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-897367: exit status 85 (62.549376ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-101341 | jenkins | v1.35.0 | 14 Apr 25 12:53 UTC |                     |
	|         | -p download-only-101341        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 14 Apr 25 12:53 UTC | 14 Apr 25 12:53 UTC |
	| delete  | -p download-only-101341        | download-only-101341 | jenkins | v1.35.0 | 14 Apr 25 12:53 UTC | 14 Apr 25 12:53 UTC |
	| start   | -o=json --download-only        | download-only-897367 | jenkins | v1.35.0 | 14 Apr 25 12:53 UTC |                     |
	|         | -p download-only-897367        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 12:53:41
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 12:53:41.421914 2190692 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:53:41.422048 2190692 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:53:41.422059 2190692 out.go:358] Setting ErrFile to fd 2...
	I0414 12:53:41.422063 2190692 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:53:41.422277 2190692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 12:53:41.423370 2190692 out.go:352] Setting JSON to true
	I0414 12:53:41.424566 2190692 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":164160,"bootTime":1744471061,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 12:53:41.424678 2190692 start.go:139] virtualization: kvm guest
	I0414 12:53:41.426436 2190692 out.go:97] [download-only-897367] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 12:53:41.426614 2190692 notify.go:220] Checking for updates...
	I0414 12:53:41.427748 2190692 out.go:169] MINIKUBE_LOCATION=20623
	I0414 12:53:41.429301 2190692 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 12:53:41.430631 2190692 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 12:53:41.431810 2190692 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 12:53:41.433230 2190692 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0414 12:53:41.435530 2190692 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0414 12:53:41.435771 2190692 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 12:53:41.469074 2190692 out.go:97] Using the kvm2 driver based on user configuration
	I0414 12:53:41.469116 2190692 start.go:297] selected driver: kvm2
	I0414 12:53:41.469125 2190692 start.go:901] validating driver "kvm2" against <nil>
	I0414 12:53:41.469489 2190692 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:53:41.469591 2190692 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20623-2183077/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 12:53:41.484994 2190692 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 12:53:41.485039 2190692 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 12:53:41.485532 2190692 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0414 12:53:41.485694 2190692 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 12:53:41.485726 2190692 cni.go:84] Creating CNI manager for ""
	I0414 12:53:41.485783 2190692 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:53:41.485797 2190692 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 12:53:41.485856 2190692 start.go:340] cluster config:
	{Name:download-only-897367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-897367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:53:41.485979 2190692 iso.go:125] acquiring lock: {Name:mk1b6bc811d798b73231639961523f4c8d001a9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:53:41.487396 2190692 out.go:97] Starting "download-only-897367" primary control-plane node in "download-only-897367" cluster
	I0414 12:53:41.487418 2190692 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 12:53:42.241024 2190692 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 12:53:42.241070 2190692 cache.go:56] Caching tarball of preloaded images
	I0414 12:53:42.241266 2190692 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 12:53:42.242948 2190692 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0414 12:53:42.242973 2190692 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	I0414 12:53:42.399363 2190692 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:a1ce605168a895ad5f3b3c8db1fe4d66 -> /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 12:53:56.393284 2190692 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	I0414 12:53:56.393390 2190692 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20623-2183077/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-897367 host does not exist
	  To start a cluster, run: "minikube start -p download-only-897367"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-897367
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I0414 12:53:58.799989 2190400 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-403481 --alsologtostderr --binary-mirror http://127.0.0.1:40467 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-403481" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-403481
--- PASS: TestBinaryMirror (0.62s)
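The run above points --binary-mirror at a local HTTP endpoint so kubectl/kubelet/kubeadm are fetched from it instead of dl.k8s.io. A minimal sketch of doing the same by hand; the directory layout shown is an assumption (that minikube requests <mirror>/<k8s-version>/bin/linux/amd64/<binary>), and the port and profile name are illustrative:

# Serve a directory that mirrors the release layout (assumed path shape)
mkdir -p mirror/v1.32.2/bin/linux/amd64
(cd mirror && python3 -m http.server 40467 &)

# Point minikube at the local mirror instead of the default download host
out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
  --binary-mirror http://127.0.0.1:40467 --driver=kvm2 --container-runtime=crio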

                                                
                                    
x
+
TestOffline (101.51s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-468991 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-468991 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m40.350715815s)
helpers_test.go:175: Cleaning up "offline-crio-468991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-468991
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-468991: (1.162559453s)
--- PASS: TestOffline (101.51s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-102056
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-102056: exit status 85 (58.176272ms)

                                                
                                                
-- stdout --
	* Profile "addons-102056" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-102056"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-102056
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-102056: exit status 85 (58.933476ms)

                                                
                                                
-- stdout --
	* Profile "addons-102056" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-102056"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (204.3s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-102056 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-102056 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m24.299952772s)
--- PASS: TestAddons/Setup (204.30s)
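The setup run above enables every addon under test in a single start invocation. A minimal sketch of the same pattern with a smaller addon set, using only flags that appear in the command above (profile name "addons-demo" is illustrative):

# Start a profile with several addons enabled up front
out/minikube-linux-amd64 start -p addons-demo --wait=true --memory=4000 \
  --driver=kvm2 --container-runtime=crio \
  --addons=ingress --addons=ingress-dns --addons=metrics-server --addons=registry

# Addons can also be toggled after the cluster is up
out/minikube-linux-amd64 addons enable csi-hostpath-driver -p addons-demo
out/minikube-linux-amd64 addons disable registry -p addons-demo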

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (1.5s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-102056 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-102056 get secret gcp-auth -n new-namespace
addons_test.go:583: (dbg) Non-zero exit: kubectl --context addons-102056 get secret gcp-auth -n new-namespace: exit status 1 (71.826721ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:575: (dbg) Run:  kubectl --context addons-102056 logs -l app=gcp-auth -n gcp-auth
I0414 12:57:24.290640 2190400 retry.go:31] will retry after 1.238453474s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2025/04/14 12:57:23 GCP Auth Webhook started!
	2025/04/14 12:57:24 Ready to marshal response ...
	2025/04/14 12:57:24 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:583: (dbg) Run:  kubectl --context addons-102056 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (1.50s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (14.5s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-102056 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-102056 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a187f3d8-a406-4837-afb8-81b2942133b1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a187f3d8-a406-4837-afb8-81b2942133b1] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 14.004133146s
addons_test.go:633: (dbg) Run:  kubectl --context addons-102056 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-102056 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-102056 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (14.50s)

                                                
                                    
x
+
TestAddons/parallel/Registry (23.63s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.67612ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-j2pj9" [465b5148-6e62-4e44-a183-a71768164039] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003865543s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-sjhg2" [972e37a5-f490-4f90-aef0-cdf8b49da676] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002469002s
addons_test.go:331: (dbg) Run:  kubectl --context addons-102056 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-102056 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-102056 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (12.857953133s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-102056 ip
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-102056 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (23.63s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.22s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7wqc4" [52f53f9f-57d8-4f4c-a36c-06b499190a6c] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004334692s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-102056 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-102056 addons disable inspektor-gadget --alsologtostderr -v=1: (6.217986231s)
--- PASS: TestAddons/parallel/InspektorGadget (12.22s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (7.5s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.721734ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-xdmb7" [26c6d03a-0446-4c50-8e92-c38f33472918] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003205325s
addons_test.go:402: (dbg) Run:  kubectl --context addons-102056 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-102056 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-102056 addons disable metrics-server --alsologtostderr -v=1: (1.406736605s)
--- PASS: TestAddons/parallel/MetricsServer (7.50s)

                                                
                                    
x
+
TestAddons/parallel/CSI (69.94s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0414 12:58:12.145365 2190400 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0414 12:58:12.150718 2190400 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0414 12:58:12.150744 2190400 kapi.go:107] duration metric: took 5.382993ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.392131ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-102056 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-102056 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [bdac965a-90e4-426c-8090-8cc6553f3547] Pending
helpers_test.go:344: "task-pv-pod" [bdac965a-90e4-426c-8090-8cc6553f3547] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [bdac965a-90e4-426c-8090-8cc6553f3547] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004097264s
addons_test.go:511: (dbg) Run:  kubectl --context addons-102056 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-102056 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-102056 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-102056 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-102056 delete pod task-pv-pod: (1.272503112s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-102056 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-102056 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-102056 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [46920e6e-9e40-48de-b16c-da16be58802c] Pending
helpers_test.go:344: "task-pv-pod-restore" [46920e6e-9e40-48de-b16c-da16be58802c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [46920e6e-9e40-48de-b16c-da16be58802c] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003710214s
addons_test.go:553: (dbg) Run:  kubectl --context addons-102056 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-102056 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-102056 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-102056 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-102056 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-102056 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.781250975s)
--- PASS: TestAddons/parallel/CSI (69.94s)
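The CSI test above walks a PVC -> pod -> VolumeSnapshot -> restored-PVC cycle using manifests from testdata/csi-hostpath-driver. A rough sketch of the snapshot and restore objects, reusing the resource names from the log; the StorageClass and VolumeSnapshotClass names are assumptions, not taken from this report:

# Snapshot the existing claim "hpvc"
kubectl apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc
EOF

# Restore the snapshot into a new claim "hpvc-restore"
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                  # assumed class name
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
EOF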

                                                
                                    
x
+
TestAddons/parallel/Headlamp (20.71s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-102056 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-7q8ss" [4eca3d10-ae2a-4d6c-831a-fcdcf2b1faff] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-7q8ss" [4eca3d10-ae2a-4d6c-831a-fcdcf2b1faff] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004676432s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-102056 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-102056 addons disable headlamp --alsologtostderr -v=1: (5.828469549s)
--- PASS: TestAddons/parallel/Headlamp (20.71s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.64s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7768d7fbc5-9xn5s" [d2d4ced2-cd67-4cd9-8565-731532c47b80] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003814042s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-102056 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.64s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (56.32s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-102056 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-102056 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc test-pvc -o jsonpath={.status.phase} -n default
2025/04/14 12:58:11 [DEBUG] GET http://192.168.39.15:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-102056 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [6b134908-f7f8-415f-a4e2-247139e4166b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [6b134908-f7f8-415f-a4e2-247139e4166b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [6b134908-f7f8-415f-a4e2-247139e4166b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.006278461s
addons_test.go:906: (dbg) Run:  kubectl --context addons-102056 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-102056 ssh "cat /opt/local-path-provisioner/pvc-df14fbd3-4cbb-489d-82fc-3b8f87697b3c_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-102056 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-102056 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-102056 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-102056 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.5112429s)
--- PASS: TestAddons/parallel/LocalPath (56.32s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gxnwr" [f8c27fc5-108d-4c72-b7f8-84e3bba4a3f6] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003205228s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-102056 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.62s)

TestAddons/parallel/Yakd (11.74s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-94h44" [e5b12441-1811-4bbf-8b4f-24396f9d019d] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.010176193s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-102056 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-102056 addons disable yakd --alsologtostderr -v=1: (5.726652759s)
--- PASS: TestAddons/parallel/Yakd (11.74s)

TestAddons/StoppedEnableDisable (91.27s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-102056
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-102056: (1m30.976144459s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-102056
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-102056
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-102056
--- PASS: TestAddons/StoppedEnableDisable (91.27s)

TestCertOptions (49.43s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-567507 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0414 14:00:11.058250 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:00:27.987163 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-567507 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (48.13654076s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-567507 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-567507 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-567507 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-567507" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-567507
--- PASS: TestCertOptions (49.43s)

TestCertExpiration (305.76s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-528114 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-528114 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (44.702908214s)
E0414 13:55:27.987037 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-528114 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-528114 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m20.219682851s)
helpers_test.go:175: Cleaning up "cert-expiration-528114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-528114
--- PASS: TestCertExpiration (305.76s)

TestForceSystemdFlag (48.57s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-509258 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-509258 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (47.231204028s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-509258 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-509258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-509258
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-509258: (1.120730792s)
--- PASS: TestForceSystemdFlag (48.57s)

TestForceSystemdEnv (71.1s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-497484 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-497484 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.095586817s)
helpers_test.go:175: Cleaning up "force-systemd-env-497484" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-497484
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-497484: (1.00505411s)
--- PASS: TestForceSystemdEnv (71.10s)

TestKVMDriverInstallOrUpdate (7.95s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0414 14:00:58.920357 2190400 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 14:00:58.920531 2190400 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0414 14:00:58.952367 2190400 install.go:62] docker-machine-driver-kvm2: exit status 1
W0414 14:00:58.952509 2190400 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0414 14:00:58.952555 2190400 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2483901412/001/docker-machine-driver-kvm2
I0414 14:00:59.551760 2190400 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2483901412/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc00054c8f8 gz:0xc00054ca30 tar:0xc00054c9a0 tar.bz2:0xc00054c9e0 tar.gz:0xc00054c9f0 tar.xz:0xc00054ca00 tar.zst:0xc00054ca20 tbz2:0xc00054c9e0 tgz:0xc00054c9f0 txz:0xc00054ca00 tzst:0xc00054ca20 xz:0xc00054ca38 zip:0xc00054ca40 zst:0xc00054ca50] Getters:map[file:0xc001ac6610 http:0xc000d1a2d0 https:0xc000d1a320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0414 14:00:59.551812 2190400 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2483901412/001/docker-machine-driver-kvm2
I0414 14:01:03.622081 2190400 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 14:01:03.622225 2190400 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0414 14:01:03.655224 2190400 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0414 14:01:03.655274 2190400 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0414 14:01:03.655353 2190400 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0414 14:01:03.655395 2190400 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2483901412/002/docker-machine-driver-kvm2
I0414 14:01:03.970017 2190400 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2483901412/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc00054c8f8 gz:0xc00054ca30 tar:0xc00054c9a0 tar.bz2:0xc00054c9e0 tar.gz:0xc00054c9f0 tar.xz:0xc00054ca00 tar.zst:0xc00054ca20 tbz2:0xc00054c9e0 tgz:0xc00054c9f0 txz:0xc00054ca00 tzst:0xc00054ca20 xz:0xc00054ca38 zip:0xc00054ca40 zst:0xc00054ca50] Getters:map[file:0xc001ac6ee0 http:0xc000d1b220 https:0xc000d1b270] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0414 14:01:03.970063 2190400 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2483901412/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (7.95s)

TestErrorSpam/setup (42.12s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-070444 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-070444 --driver=kvm2  --container-runtime=crio
E0414 13:02:25.774741 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:02:25.784610 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:02:25.795955 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:02:25.817811 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:02:25.859219 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:02:25.940714 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:02:26.102382 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:02:26.424171 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:02:27.066322 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:02:28.348028 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:02:30.910923 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:02:36.032725 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:02:46.274522 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-070444 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-070444 --driver=kvm2  --container-runtime=crio: (42.122956026s)
--- PASS: TestErrorSpam/setup (42.12s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-070444 --log_dir /tmp/nospam-070444 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-070444 --log_dir /tmp/nospam-070444 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-070444 --log_dir /tmp/nospam-070444 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.75s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-070444 --log_dir /tmp/nospam-070444 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-070444 --log_dir /tmp/nospam-070444 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-070444 --log_dir /tmp/nospam-070444 status
--- PASS: TestErrorSpam/status (0.75s)

TestErrorSpam/pause (1.63s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-070444 --log_dir /tmp/nospam-070444 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-070444 --log_dir /tmp/nospam-070444 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-070444 --log_dir /tmp/nospam-070444 pause
--- PASS: TestErrorSpam/pause (1.63s)

TestErrorSpam/unpause (1.85s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-070444 --log_dir /tmp/nospam-070444 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-070444 --log_dir /tmp/nospam-070444 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-070444 --log_dir /tmp/nospam-070444 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

TestErrorSpam/stop (4.8s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-070444 --log_dir /tmp/nospam-070444 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-070444 --log_dir /tmp/nospam-070444 stop: (2.319199964s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-070444 --log_dir /tmp/nospam-070444 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-070444 --log_dir /tmp/nospam-070444 stop: (1.071248392s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-070444 --log_dir /tmp/nospam-070444 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-070444 --log_dir /tmp/nospam-070444 stop: (1.406195261s)
--- PASS: TestErrorSpam/stop (4.80s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20623-2183077/.minikube/files/etc/test/nested/copy/2190400/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (58.89s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-891289 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0414 13:03:06.756036 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:03:47.717598 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-891289 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (58.887550697s)
--- PASS: TestFunctional/serial/StartWithProxy (58.89s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.35s)

=== RUN   TestFunctional/serial/SoftStart
I0414 13:03:57.337339 2190400 config.go:182] Loaded profile config "functional-891289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-891289 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-891289 --alsologtostderr -v=8: (39.347099647s)
functional_test.go:680: soft start took 39.347752662s for "functional-891289" cluster.
I0414 13:04:36.684828 2190400 config.go:182] Loaded profile config "functional-891289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (39.35s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-891289 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-891289 cache add registry.k8s.io/pause:3.1: (1.115127585s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-891289 cache add registry.k8s.io/pause:3.3: (1.098496799s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-891289 cache add registry.k8s.io/pause:latest: (1.118883428s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.33s)

TestFunctional/serial/CacheCmd/cache/add_local (2.77s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-891289 /tmp/TestFunctionalserialCacheCmdcacheadd_local2495098269/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 cache add minikube-local-cache-test:functional-891289
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-891289 cache add minikube-local-cache-test:functional-891289: (2.468915223s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 cache delete minikube-local-cache-test:functional-891289
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-891289
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.77s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-891289 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.400112ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-amd64 -p functional-891289 cache reload: (1.045760787s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 kubectl -- --context functional-891289 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-891289 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (33.85s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-891289 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0414 13:05:09.640930 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-891289 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.846632378s)
functional_test.go:778: restart took 33.846781618s for "functional-891289" cluster.
I0414 13:05:19.188302 2190400 config.go:182] Loaded profile config "functional-891289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (33.85s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-891289 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.49s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-891289 logs: (1.489283001s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

TestFunctional/serial/LogsFileCmd (1.47s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 logs --file /tmp/TestFunctionalserialLogsFileCmd226192932/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-891289 logs --file /tmp/TestFunctionalserialLogsFileCmd226192932/001/logs.txt: (1.468688314s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

TestFunctional/serial/InvalidService (4.8s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-891289 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-891289
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-891289: exit status 115 (301.462321ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.223:31140 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-891289 delete -f testdata/invalidsvc.yaml
functional_test.go:2344: (dbg) Done: kubectl --context functional-891289 delete -f testdata/invalidsvc.yaml: (1.28381709s)
--- PASS: TestFunctional/serial/InvalidService (4.80s)

TestFunctional/parallel/ConfigCmd (0.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-891289 config get cpus: exit status 14 (61.860077ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-891289 config get cpus: exit status 14 (55.473772ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)

TestFunctional/parallel/DashboardCmd (15.53s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-891289 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-891289 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2199203: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.53s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-891289 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-891289 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (137.67134ms)
-- stdout --
	* [functional-891289] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20623
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0414 13:05:55.480693 2198910 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:05:55.480941 2198910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:05:55.480957 2198910 out.go:358] Setting ErrFile to fd 2...
	I0414 13:05:55.480961 2198910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:05:55.481134 2198910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 13:05:55.481689 2198910 out.go:352] Setting JSON to false
	I0414 13:05:55.482790 2198910 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":164894,"bootTime":1744471061,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 13:05:55.482900 2198910 start.go:139] virtualization: kvm guest
	I0414 13:05:55.484556 2198910 out.go:177] * [functional-891289] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 13:05:55.485773 2198910 out.go:177]   - MINIKUBE_LOCATION=20623
	I0414 13:05:55.485798 2198910 notify.go:220] Checking for updates...
	I0414 13:05:55.488422 2198910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 13:05:55.489570 2198910 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 13:05:55.490567 2198910 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 13:05:55.491566 2198910 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 13:05:55.492512 2198910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 13:05:55.494039 2198910 config.go:182] Loaded profile config "functional-891289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:05:55.494459 2198910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:05:55.494531 2198910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:05:55.511780 2198910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
	I0414 13:05:55.512311 2198910 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:05:55.512897 2198910 main.go:141] libmachine: Using API Version  1
	I0414 13:05:55.512926 2198910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:05:55.513404 2198910 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:05:55.513603 2198910 main.go:141] libmachine: (functional-891289) Calling .DriverName
	I0414 13:05:55.513864 2198910 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 13:05:55.514233 2198910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:05:55.514291 2198910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:05:55.530438 2198910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37255
	I0414 13:05:55.530937 2198910 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:05:55.531374 2198910 main.go:141] libmachine: Using API Version  1
	I0414 13:05:55.531398 2198910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:05:55.531779 2198910 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:05:55.531959 2198910 main.go:141] libmachine: (functional-891289) Calling .DriverName
	I0414 13:05:55.565077 2198910 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 13:05:55.566230 2198910 start.go:297] selected driver: kvm2
	I0414 13:05:55.566244 2198910 start.go:901] validating driver "kvm2" against &{Name:functional-891289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-891289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:05:55.566338 2198910 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 13:05:55.568180 2198910 out.go:201] 
	W0414 13:05:55.569356 2198910 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0414 13:05:55.570435 2198910 out.go:201] 
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-891289 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-891289 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-891289 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.719308ms)
-- stdout --
	* [functional-891289] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20623
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0414 13:05:51.128893 2198530 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:05:51.129198 2198530 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:05:51.129210 2198530 out.go:358] Setting ErrFile to fd 2...
	I0414 13:05:51.129217 2198530 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:05:51.129496 2198530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 13:05:51.130048 2198530 out.go:352] Setting JSON to false
	I0414 13:05:51.131130 2198530 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":164890,"bootTime":1744471061,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 13:05:51.131241 2198530 start.go:139] virtualization: kvm guest
	I0414 13:05:51.132866 2198530 out.go:177] * [functional-891289] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0414 13:05:51.134399 2198530 out.go:177]   - MINIKUBE_LOCATION=20623
	I0414 13:05:51.134417 2198530 notify.go:220] Checking for updates...
	I0414 13:05:51.136456 2198530 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 13:05:51.137648 2198530 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 13:05:51.138788 2198530 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 13:05:51.139910 2198530 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 13:05:51.141097 2198530 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 13:05:51.142826 2198530 config.go:182] Loaded profile config "functional-891289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:05:51.143300 2198530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:05:51.143400 2198530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:05:51.160243 2198530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36551
	I0414 13:05:51.160702 2198530 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:05:51.161276 2198530 main.go:141] libmachine: Using API Version  1
	I0414 13:05:51.161304 2198530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:05:51.161735 2198530 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:05:51.162068 2198530 main.go:141] libmachine: (functional-891289) Calling .DriverName
	I0414 13:05:51.162383 2198530 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 13:05:51.162785 2198530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:05:51.162831 2198530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:05:51.177602 2198530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35637
	I0414 13:05:51.178112 2198530 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:05:51.178619 2198530 main.go:141] libmachine: Using API Version  1
	I0414 13:05:51.178644 2198530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:05:51.179054 2198530 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:05:51.179280 2198530 main.go:141] libmachine: (functional-891289) Calling .DriverName
	I0414 13:05:51.212089 2198530 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0414 13:05:51.213440 2198530 start.go:297] selected driver: kvm2
	I0414 13:05:51.213462 2198530 start.go:901] validating driver "kvm2" against &{Name:functional-891289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-891289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:05:51.213562 2198530 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 13:05:51.215534 2198530 out.go:201] 
	W0414 13:05:51.216609 2198530 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0414 13:05:51.217696 2198530 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
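Note on the French output above: this test deliberately starts minikube under a French locale with a memory request far below the minimum, so the expected result is the localized RSRC_INSUFFICIENT_REQ_MEMORY error ("Fermeture en raison de ..." = "Exiting due to ..."). The exact invocation is not shown in this log; a rough, hand-run equivalent would be something like:

	LC_ALL=fr out/minikube-linux-amd64 start -p functional-891289 --memory=250mb
	# expected: non-zero exit during validation, with the RSRC_INSUFFICIENT_REQ_MEMORY
	# message printed in French instead of English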

                                                
                                    
TestFunctional/parallel/StatusCmd (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (24.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-891289 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-891289 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-b55cg" [a688a22a-fee3-4551-aae4-a9e0afa39aa0] Pending
helpers_test.go:344: "hello-node-connect-58f9cf68d8-b55cg" [a688a22a-fee3-4551-aae4-a9e0afa39aa0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-b55cg" [a688a22a-fee3-4551-aae4-a9e0afa39aa0] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 24.003917601s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.223:32746
functional_test.go:1692: http://192.168.39.223:32746: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-b55cg

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.223:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.223:32746
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (24.47s)
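The same flow can be replayed by hand against this profile; the deployment name, image and commands below are copied from the log, and the final curl is an added illustration (the test itself fetches the URL from Go):

	kubectl --context functional-891289 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-891289 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-891289 service hello-node-connect --url)
	curl -s "$URL"   # should return the echoserver report shown above (Hostname, Request Information, ...)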

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (51.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f4d6810e-6300-42f9-a8cc-52a3579718b7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005635274s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-891289 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-891289 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-891289 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-891289 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c1ef364e-253b-4327-afd6-1c30d2547c2e] Pending
helpers_test.go:344: "sp-pod" [c1ef364e-253b-4327-afd6-1c30d2547c2e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c1ef364e-253b-4327-afd6-1c30d2547c2e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.003967102s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-891289 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-891289 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-891289 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0c6442c6-9407-4b31-a1ba-5375cc5efcc0] Pending
helpers_test.go:344: "sp-pod" [0c6442c6-9407-4b31-a1ba-5375cc5efcc0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0c6442c6-9407-4b31-a1ba-5375cc5efcc0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.003480393s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-891289 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (51.99s)
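What this test asserts is durability across pod recreation: a file written through the PVC must still be present after the consuming pod is deleted and recreated. A condensed, hand-run version of the same check, using the test's own manifests (wait for sp-pod to reach Running after each apply before exec'ing):

	kubectl --context functional-891289 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-891289 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-891289 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-891289 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-891289 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-891289 exec sp-pod -- ls /tmp/mount   # "foo" should still be listed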

                                                
                                    
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh -n functional-891289 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 cp functional-891289:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2052677728/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh -n functional-891289 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh -n functional-891289 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.35s)

                                                
                                    
TestFunctional/parallel/MySQL (23.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-891289 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-fbntw" [4511f1f1-1b01-45fd-ad8c-253e9bdc7a8c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-fbntw" [4511f1f1-1b01-45fd-ad8c-253e9bdc7a8c] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.0025572s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-891289 exec mysql-58ccfd96bb-fbntw -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-891289 exec mysql-58ccfd96bb-fbntw -- mysql -ppassword -e "show databases;": exit status 1 (123.843061ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0414 13:05:49.112784 2190400 retry.go:31] will retry after 1.460809387s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-891289 exec mysql-58ccfd96bb-fbntw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.32s)
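The ERROR 2002 above is not a test failure: mysqld inside the pod is still initializing when the first query arrives, so the client cannot reach the server socket yet, and the harness simply retries until the query succeeds. A manual equivalent is a small retry loop (pod name taken from the log; the loop itself is illustrative):

	until kubectl --context functional-891289 exec mysql-58ccfd96bb-fbntw -- \
	      mysql -ppassword -e "show databases;"; do
	    sleep 2   # mysqld not ready yet; try again
	done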

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/2190400/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "sudo cat /etc/test/nested/copy/2190400/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
TestFunctional/parallel/CertSync (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/2190400.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "sudo cat /etc/ssl/certs/2190400.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/2190400.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "sudo cat /usr/share/ca-certificates/2190400.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/21904002.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "sudo cat /etc/ssl/certs/21904002.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/21904002.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "sudo cat /usr/share/ca-certificates/21904002.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.37s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-891289 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-891289 ssh "sudo systemctl is-active docker": exit status 1 (240.286551ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-891289 ssh "sudo systemctl is-active containerd": exit status 1 (224.893505ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
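The non-zero exits above are the expected outcome: with crio as the active container runtime, "systemctl is-active" reports "inactive" for docker and containerd and exits with status 3, which minikube ssh surfaces as a failed command. A quick manual spot-check along the same lines:

	out/minikube-linux-amd64 -p functional-891289 ssh "sudo systemctl is-active docker"   # prints "inactive", remote exit 3
	out/minikube-linux-amd64 -p functional-891289 ssh "sudo systemctl is-active crio"     # crio should report "active"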

                                                
                                    
TestFunctional/parallel/License (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.81s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-891289 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-891289
localhost/kicbase/echo-server:functional-891289
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-891289 image ls --format short --alsologtostderr:
I0414 13:05:57.408809 2199228 out.go:345] Setting OutFile to fd 1 ...
I0414 13:05:57.409095 2199228 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:05:57.409105 2199228 out.go:358] Setting ErrFile to fd 2...
I0414 13:05:57.409112 2199228 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:05:57.409335 2199228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
I0414 13:05:57.409978 2199228 config.go:182] Loaded profile config "functional-891289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 13:05:57.410112 2199228 config.go:182] Loaded profile config "functional-891289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 13:05:57.410511 2199228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 13:05:57.410598 2199228 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:05:57.426720 2199228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41507
I0414 13:05:57.427249 2199228 main.go:141] libmachine: () Calling .GetVersion
I0414 13:05:57.427790 2199228 main.go:141] libmachine: Using API Version  1
I0414 13:05:57.427815 2199228 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:05:57.428217 2199228 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:05:57.428416 2199228 main.go:141] libmachine: (functional-891289) Calling .GetState
I0414 13:05:57.430546 2199228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 13:05:57.430596 2199228 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:05:57.446587 2199228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40297
I0414 13:05:57.447063 2199228 main.go:141] libmachine: () Calling .GetVersion
I0414 13:05:57.447536 2199228 main.go:141] libmachine: Using API Version  1
I0414 13:05:57.447561 2199228 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:05:57.447924 2199228 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:05:57.448113 2199228 main.go:141] libmachine: (functional-891289) Calling .DriverName
I0414 13:05:57.448310 2199228 ssh_runner.go:195] Run: systemctl --version
I0414 13:05:57.448335 2199228 main.go:141] libmachine: (functional-891289) Calling .GetSSHHostname
I0414 13:05:57.451245 2199228 main.go:141] libmachine: (functional-891289) DBG | domain functional-891289 has defined MAC address 52:54:00:25:ec:9a in network mk-functional-891289
I0414 13:05:57.451652 2199228 main.go:141] libmachine: (functional-891289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:ec:9a", ip: ""} in network mk-functional-891289: {Iface:virbr1 ExpiryTime:2025-04-14 14:03:13 +0000 UTC Type:0 Mac:52:54:00:25:ec:9a Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:functional-891289 Clientid:01:52:54:00:25:ec:9a}
I0414 13:05:57.451684 2199228 main.go:141] libmachine: (functional-891289) DBG | domain functional-891289 has defined IP address 192.168.39.223 and MAC address 52:54:00:25:ec:9a in network mk-functional-891289
I0414 13:05:57.451841 2199228 main.go:141] libmachine: (functional-891289) Calling .GetSSHPort
I0414 13:05:57.452059 2199228 main.go:141] libmachine: (functional-891289) Calling .GetSSHKeyPath
I0414 13:05:57.452240 2199228 main.go:141] libmachine: (functional-891289) Calling .GetSSHUsername
I0414 13:05:57.452398 2199228 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/functional-891289/id_rsa Username:docker}
I0414 13:05:57.534925 2199228 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 13:05:57.572267 2199228 main.go:141] libmachine: Making call to close driver server
I0414 13:05:57.572293 2199228 main.go:141] libmachine: (functional-891289) Calling .Close
I0414 13:05:57.572580 2199228 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:05:57.572596 2199228 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 13:05:57.572604 2199228 main.go:141] libmachine: Making call to close driver server
I0414 13:05:57.572624 2199228 main.go:141] libmachine: (functional-891289) DBG | Closing plugin on server side
I0414 13:05:57.572690 2199228 main.go:141] libmachine: (functional-891289) Calling .Close
I0414 13:05:57.572943 2199228 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:05:57.572965 2199228 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 13:05:57.572971 2199228 main.go:141] libmachine: (functional-891289) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-891289 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | d300845f67aeb | 95.7MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/kicbase/echo-server           | functional-891289  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.32.2            | 85b7a174738ba | 98.1MB |
| registry.k8s.io/kube-controller-manager | v1.32.2            | b6a454c5a800d | 90.8MB |
| registry.k8s.io/kube-scheduler          | v1.32.2            | d8e673e7c9983 | 70.7MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-891289  | 7f268156ba838 | 3.33kB |
| localhost/my-image                      | functional-891289  | 239ee6ca67667 | 1.47MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | 4cad75abc83d5 | 196MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-proxy              | v1.32.2            | f1332858868e1 | 95.3MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-891289 image ls --format table --alsologtostderr:
I0414 13:06:04.448322 2199442 out.go:345] Setting OutFile to fd 1 ...
I0414 13:06:04.448443 2199442 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:06:04.448454 2199442 out.go:358] Setting ErrFile to fd 2...
I0414 13:06:04.448460 2199442 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:06:04.448672 2199442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
I0414 13:06:04.449340 2199442 config.go:182] Loaded profile config "functional-891289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 13:06:04.449465 2199442 config.go:182] Loaded profile config "functional-891289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 13:06:04.449939 2199442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 13:06:04.450026 2199442 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:06:04.466294 2199442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41163
I0414 13:06:04.466745 2199442 main.go:141] libmachine: () Calling .GetVersion
I0414 13:06:04.467318 2199442 main.go:141] libmachine: Using API Version  1
I0414 13:06:04.467344 2199442 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:06:04.467702 2199442 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:06:04.467896 2199442 main.go:141] libmachine: (functional-891289) Calling .GetState
I0414 13:06:04.469800 2199442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 13:06:04.469859 2199442 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:06:04.485092 2199442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38827
I0414 13:06:04.485543 2199442 main.go:141] libmachine: () Calling .GetVersion
I0414 13:06:04.486003 2199442 main.go:141] libmachine: Using API Version  1
I0414 13:06:04.486033 2199442 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:06:04.486387 2199442 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:06:04.486577 2199442 main.go:141] libmachine: (functional-891289) Calling .DriverName
I0414 13:06:04.486814 2199442 ssh_runner.go:195] Run: systemctl --version
I0414 13:06:04.486841 2199442 main.go:141] libmachine: (functional-891289) Calling .GetSSHHostname
I0414 13:06:04.489919 2199442 main.go:141] libmachine: (functional-891289) DBG | domain functional-891289 has defined MAC address 52:54:00:25:ec:9a in network mk-functional-891289
I0414 13:06:04.490301 2199442 main.go:141] libmachine: (functional-891289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:ec:9a", ip: ""} in network mk-functional-891289: {Iface:virbr1 ExpiryTime:2025-04-14 14:03:13 +0000 UTC Type:0 Mac:52:54:00:25:ec:9a Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:functional-891289 Clientid:01:52:54:00:25:ec:9a}
I0414 13:06:04.490335 2199442 main.go:141] libmachine: (functional-891289) DBG | domain functional-891289 has defined IP address 192.168.39.223 and MAC address 52:54:00:25:ec:9a in network mk-functional-891289
I0414 13:06:04.490469 2199442 main.go:141] libmachine: (functional-891289) Calling .GetSSHPort
I0414 13:06:04.490641 2199442 main.go:141] libmachine: (functional-891289) Calling .GetSSHKeyPath
I0414 13:06:04.490812 2199442 main.go:141] libmachine: (functional-891289) Calling .GetSSHUsername
I0414 13:06:04.490973 2199442 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/functional-891289/id_rsa Username:docker}
I0414 13:06:04.576308 2199442 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 13:06:04.616503 2199442 main.go:141] libmachine: Making call to close driver server
I0414 13:06:04.616530 2199442 main.go:141] libmachine: (functional-891289) Calling .Close
I0414 13:06:04.616832 2199442 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:06:04.616858 2199442 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 13:06:04.616867 2199442 main.go:141] libmachine: Making call to close driver server
I0414 13:06:04.616875 2199442 main.go:141] libmachine: (functional-891289) Calling .Close
I0414 13:06:04.616876 2199442 main.go:141] libmachine: (functional-891289) DBG | Closing plugin on server side
I0414 13:06:04.617113 2199442 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:06:04.617131 2199442 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 13:06:04.617158 2199442 main.go:141] libmachine: (functional-891289) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-891289 image ls --format json --alsologtostderr:
[{"id":"7f268156ba838a34d17581d3f5894934b548ddf1b0f1f83801714bc4ca43410b","repoDigests":["localhost/minikube-local-cache-test@sha256:d01f5c0b2a2fc0bc122389b929d2b4b6c673ab0c022f8177dc6fb90f0b116051"],"repoTags":["localhost/minikube-local-cache-test:functional-891289"],"size":"3330"},{"id":"239ee6ca67667750279160103e98af373bc28456cecc08815f993f10b41db2d2","repoDigests":["localhost/my-image@sha256:8117dc5313cf999ba5de12d4c669594ec67bcef54fcfbb545b1299209c68872e"],"repoTags":["localhost/my-image:functional-891289"],"size":"1468599"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"b3ae964476728c40ac463c4a391b7d01f770a2efb68db173af27d85bcb586228","repoDigests":["docker.io/librar
y/2e11f69aca1878957a7629c55493935a12fa21445322f7a14144cf05ee0fc729-tmp@sha256:02856f9a802042e93cd373762782bf0d5303cf8740f8b807a461b960ffe54529"],"repoTags":[],"size":"1466017"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/ki
ndnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26","docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"95714353"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5","registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"90793286"},{"id":"f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kub
e-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d","registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"95271321"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"4cad75abc83d5ca6ee22053d85850676eaef657ee9d723d7bef61179e1e1e485","repoDigests":["docker.io/library/nginx@sha256:09369da6b10306312cd908661320086bf87fbae1b6b0c49a1f50ba531fef2eab","docker.io/library/nginx@sha256:b6653fca400812e81569f9be762ae315db685bc30b12ddcdc8616c63a227d3ca"],"
repoTags":["docker.io/library/nginx:latest"],"size":"196210580"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-891289"],"size":"4943877"},{"id":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"98055648"},{"id":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76","registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"],"repoTags":["r
egistry.k8s.io/kube-scheduler:v1.32.2"],"size":"70653254"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce199
5460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-891289 image ls --format json --alsologtostderr:
I0414 13:06:04.227345 2199402 out.go:345] Setting OutFile to fd 1 ...
I0414 13:06:04.227607 2199402 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:06:04.227616 2199402 out.go:358] Setting ErrFile to fd 2...
I0414 13:06:04.227620 2199402 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:06:04.227818 2199402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
I0414 13:06:04.228366 2199402 config.go:182] Loaded profile config "functional-891289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 13:06:04.228464 2199402 config.go:182] Loaded profile config "functional-891289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 13:06:04.228904 2199402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 13:06:04.228973 2199402 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:06:04.245322 2199402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42969
I0414 13:06:04.245896 2199402 main.go:141] libmachine: () Calling .GetVersion
I0414 13:06:04.246481 2199402 main.go:141] libmachine: Using API Version  1
I0414 13:06:04.246504 2199402 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:06:04.246860 2199402 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:06:04.247061 2199402 main.go:141] libmachine: (functional-891289) Calling .GetState
I0414 13:06:04.248820 2199402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 13:06:04.248872 2199402 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:06:04.263955 2199402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
I0414 13:06:04.264443 2199402 main.go:141] libmachine: () Calling .GetVersion
I0414 13:06:04.264957 2199402 main.go:141] libmachine: Using API Version  1
I0414 13:06:04.264978 2199402 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:06:04.265350 2199402 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:06:04.265533 2199402 main.go:141] libmachine: (functional-891289) Calling .DriverName
I0414 13:06:04.265768 2199402 ssh_runner.go:195] Run: systemctl --version
I0414 13:06:04.265799 2199402 main.go:141] libmachine: (functional-891289) Calling .GetSSHHostname
I0414 13:06:04.268676 2199402 main.go:141] libmachine: (functional-891289) DBG | domain functional-891289 has defined MAC address 52:54:00:25:ec:9a in network mk-functional-891289
I0414 13:06:04.269152 2199402 main.go:141] libmachine: (functional-891289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:ec:9a", ip: ""} in network mk-functional-891289: {Iface:virbr1 ExpiryTime:2025-04-14 14:03:13 +0000 UTC Type:0 Mac:52:54:00:25:ec:9a Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:functional-891289 Clientid:01:52:54:00:25:ec:9a}
I0414 13:06:04.269186 2199402 main.go:141] libmachine: (functional-891289) DBG | domain functional-891289 has defined IP address 192.168.39.223 and MAC address 52:54:00:25:ec:9a in network mk-functional-891289
I0414 13:06:04.269369 2199402 main.go:141] libmachine: (functional-891289) Calling .GetSSHPort
I0414 13:06:04.269537 2199402 main.go:141] libmachine: (functional-891289) Calling .GetSSHKeyPath
I0414 13:06:04.269689 2199402 main.go:141] libmachine: (functional-891289) Calling .GetSSHUsername
I0414 13:06:04.269834 2199402 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/functional-891289/id_rsa Username:docker}
I0414 13:06:04.355789 2199402 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 13:06:04.396869 2199402 main.go:141] libmachine: Making call to close driver server
I0414 13:06:04.396891 2199402 main.go:141] libmachine: (functional-891289) Calling .Close
I0414 13:06:04.397211 2199402 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:06:04.397228 2199402 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 13:06:04.397237 2199402 main.go:141] libmachine: Making call to close driver server
I0414 13:06:04.397244 2199402 main.go:141] libmachine: (functional-891289) Calling .Close
I0414 13:06:04.397498 2199402 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:06:04.397513 2199402 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 13:06:04.397537 2199402 main.go:141] libmachine: (functional-891289) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-891289 image ls --format yaml --alsologtostderr:
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 7f268156ba838a34d17581d3f5894934b548ddf1b0f1f83801714bc4ca43410b
repoDigests:
- localhost/minikube-local-cache-test@sha256:d01f5c0b2a2fc0bc122389b929d2b4b6c673ab0c022f8177dc6fb90f0b116051
repoTags:
- localhost/minikube-local-cache-test:functional-891289
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "98055648"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
- registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "95271321"
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
- registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "70653254"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 4cad75abc83d5ca6ee22053d85850676eaef657ee9d723d7bef61179e1e1e485
repoDigests:
- docker.io/library/nginx@sha256:09369da6b10306312cd908661320086bf87fbae1b6b0c49a1f50ba531fef2eab
- docker.io/library/nginx@sha256:b6653fca400812e81569f9be762ae315db685bc30b12ddcdc8616c63a227d3ca
repoTags:
- docker.io/library/nginx:latest
size: "196210580"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-891289
size: "4943877"
- id: d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
- docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "95714353"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "90793286"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-891289 image ls --format yaml --alsologtostderr:
I0414 13:05:57.624197 2199252 out.go:345] Setting OutFile to fd 1 ...
I0414 13:05:57.624467 2199252 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:05:57.624477 2199252 out.go:358] Setting ErrFile to fd 2...
I0414 13:05:57.624481 2199252 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:05:57.624708 2199252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
I0414 13:05:57.625344 2199252 config.go:182] Loaded profile config "functional-891289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 13:05:57.625456 2199252 config.go:182] Loaded profile config "functional-891289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 13:05:57.625804 2199252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 13:05:57.625889 2199252 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:05:57.643584 2199252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39155
I0414 13:05:57.644109 2199252 main.go:141] libmachine: () Calling .GetVersion
I0414 13:05:57.644703 2199252 main.go:141] libmachine: Using API Version  1
I0414 13:05:57.644744 2199252 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:05:57.645110 2199252 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:05:57.645316 2199252 main.go:141] libmachine: (functional-891289) Calling .GetState
I0414 13:05:57.647171 2199252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 13:05:57.647218 2199252 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:05:57.663360 2199252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33147
I0414 13:05:57.663798 2199252 main.go:141] libmachine: () Calling .GetVersion
I0414 13:05:57.664275 2199252 main.go:141] libmachine: Using API Version  1
I0414 13:05:57.664295 2199252 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:05:57.664666 2199252 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:05:57.664873 2199252 main.go:141] libmachine: (functional-891289) Calling .DriverName
I0414 13:05:57.665106 2199252 ssh_runner.go:195] Run: systemctl --version
I0414 13:05:57.665143 2199252 main.go:141] libmachine: (functional-891289) Calling .GetSSHHostname
I0414 13:05:57.667810 2199252 main.go:141] libmachine: (functional-891289) DBG | domain functional-891289 has defined MAC address 52:54:00:25:ec:9a in network mk-functional-891289
I0414 13:05:57.668229 2199252 main.go:141] libmachine: (functional-891289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:ec:9a", ip: ""} in network mk-functional-891289: {Iface:virbr1 ExpiryTime:2025-04-14 14:03:13 +0000 UTC Type:0 Mac:52:54:00:25:ec:9a Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:functional-891289 Clientid:01:52:54:00:25:ec:9a}
I0414 13:05:57.668261 2199252 main.go:141] libmachine: (functional-891289) DBG | domain functional-891289 has defined IP address 192.168.39.223 and MAC address 52:54:00:25:ec:9a in network mk-functional-891289
I0414 13:05:57.668369 2199252 main.go:141] libmachine: (functional-891289) Calling .GetSSHPort
I0414 13:05:57.668511 2199252 main.go:141] libmachine: (functional-891289) Calling .GetSSHKeyPath
I0414 13:05:57.668638 2199252 main.go:141] libmachine: (functional-891289) Calling .GetSSHUsername
I0414 13:05:57.668786 2199252 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/functional-891289/id_rsa Username:docker}
I0414 13:05:57.752952 2199252 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 13:05:57.789877 2199252 main.go:141] libmachine: Making call to close driver server
I0414 13:05:57.789900 2199252 main.go:141] libmachine: (functional-891289) Calling .Close
I0414 13:05:57.790185 2199252 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:05:57.790204 2199252 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 13:05:57.790212 2199252 main.go:141] libmachine: Making call to close driver server
I0414 13:05:57.790240 2199252 main.go:141] libmachine: (functional-891289) DBG | Closing plugin on server side
I0414 13:05:57.790293 2199252 main.go:141] libmachine: (functional-891289) Calling .Close
I0414 13:05:57.790517 2199252 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:05:57.790532 2199252 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-891289 ssh pgrep buildkitd: exit status 1 (196.113494ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 image build -t localhost/my-image:functional-891289 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-891289 image build -t localhost/my-image:functional-891289 testdata/build --alsologtostderr: (5.959925648s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-891289 image build -t localhost/my-image:functional-891289 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b3ae9644767
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-891289
--> 239ee6ca676
Successfully tagged localhost/my-image:functional-891289
239ee6ca67667750279160103e98af373bc28456cecc08815f993f10b41db2d2
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-891289 image build -t localhost/my-image:functional-891289 testdata/build --alsologtostderr:
I0414 13:05:58.037324 2199306 out.go:345] Setting OutFile to fd 1 ...
I0414 13:05:58.037426 2199306 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:05:58.037434 2199306 out.go:358] Setting ErrFile to fd 2...
I0414 13:05:58.037438 2199306 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:05:58.037619 2199306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
I0414 13:05:58.038157 2199306 config.go:182] Loaded profile config "functional-891289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 13:05:58.038779 2199306 config.go:182] Loaded profile config "functional-891289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 13:05:58.039125 2199306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 13:05:58.039171 2199306 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:05:58.055483 2199306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39987
I0414 13:05:58.056894 2199306 main.go:141] libmachine: () Calling .GetVersion
I0414 13:05:58.057470 2199306 main.go:141] libmachine: Using API Version  1
I0414 13:05:58.057499 2199306 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:05:58.057923 2199306 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:05:58.058128 2199306 main.go:141] libmachine: (functional-891289) Calling .GetState
I0414 13:05:58.059997 2199306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 13:05:58.060041 2199306 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:05:58.075769 2199306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36097
I0414 13:05:58.076217 2199306 main.go:141] libmachine: () Calling .GetVersion
I0414 13:05:58.076624 2199306 main.go:141] libmachine: Using API Version  1
I0414 13:05:58.076648 2199306 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:05:58.077065 2199306 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:05:58.077266 2199306 main.go:141] libmachine: (functional-891289) Calling .DriverName
I0414 13:05:58.077492 2199306 ssh_runner.go:195] Run: systemctl --version
I0414 13:05:58.077518 2199306 main.go:141] libmachine: (functional-891289) Calling .GetSSHHostname
I0414 13:05:58.080057 2199306 main.go:141] libmachine: (functional-891289) DBG | domain functional-891289 has defined MAC address 52:54:00:25:ec:9a in network mk-functional-891289
I0414 13:05:58.080499 2199306 main.go:141] libmachine: (functional-891289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:ec:9a", ip: ""} in network mk-functional-891289: {Iface:virbr1 ExpiryTime:2025-04-14 14:03:13 +0000 UTC Type:0 Mac:52:54:00:25:ec:9a Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:functional-891289 Clientid:01:52:54:00:25:ec:9a}
I0414 13:05:58.080531 2199306 main.go:141] libmachine: (functional-891289) DBG | domain functional-891289 has defined IP address 192.168.39.223 and MAC address 52:54:00:25:ec:9a in network mk-functional-891289
I0414 13:05:58.080700 2199306 main.go:141] libmachine: (functional-891289) Calling .GetSSHPort
I0414 13:05:58.080865 2199306 main.go:141] libmachine: (functional-891289) Calling .GetSSHKeyPath
I0414 13:05:58.081002 2199306 main.go:141] libmachine: (functional-891289) Calling .GetSSHUsername
I0414 13:05:58.081136 2199306 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/functional-891289/id_rsa Username:docker}
I0414 13:05:58.169591 2199306 build_images.go:161] Building image from path: /tmp/build.3066197900.tar
I0414 13:05:58.169666 2199306 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0414 13:05:58.185450 2199306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3066197900.tar
I0414 13:05:58.192249 2199306 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3066197900.tar: stat -c "%s %y" /var/lib/minikube/build/build.3066197900.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3066197900.tar': No such file or directory
I0414 13:05:58.192282 2199306 ssh_runner.go:362] scp /tmp/build.3066197900.tar --> /var/lib/minikube/build/build.3066197900.tar (3072 bytes)
I0414 13:05:58.217975 2199306 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3066197900
I0414 13:05:58.227743 2199306 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3066197900 -xf /var/lib/minikube/build/build.3066197900.tar
I0414 13:05:58.237165 2199306 crio.go:315] Building image: /var/lib/minikube/build/build.3066197900
I0414 13:05:58.237251 2199306 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-891289 /var/lib/minikube/build/build.3066197900 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0414 13:06:03.909469 2199306 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-891289 /var/lib/minikube/build/build.3066197900 --cgroup-manager=cgroupfs: (5.67218537s)
I0414 13:06:03.909542 2199306 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3066197900
I0414 13:06:03.924225 2199306 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3066197900.tar
I0414 13:06:03.945337 2199306 build_images.go:217] Built localhost/my-image:functional-891289 from /tmp/build.3066197900.tar
I0414 13:06:03.945388 2199306 build_images.go:133] succeeded building to: functional-891289
I0414 13:06:03.945395 2199306 build_images.go:134] failed building to: 
I0414 13:06:03.945433 2199306 main.go:141] libmachine: Making call to close driver server
I0414 13:06:03.945446 2199306 main.go:141] libmachine: (functional-891289) Calling .Close
I0414 13:06:03.945758 2199306 main.go:141] libmachine: (functional-891289) DBG | Closing plugin on server side
I0414 13:06:03.945803 2199306 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:06:03.945812 2199306 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 13:06:03.945826 2199306 main.go:141] libmachine: Making call to close driver server
I0414 13:06:03.945832 2199306 main.go:141] libmachine: (functional-891289) Calling .Close
I0414 13:06:03.946131 2199306 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:06:03.946159 2199306 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 13:06:03.946166 2199306 main.go:141] libmachine: (functional-891289) DBG | Closing plugin on server side
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.38s)
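The STEP lines above fully determine the image that gets built: a three-step build on top of gcr.io/k8s-minikube/busybox. As a hedged sketch, an equivalent build context can be reconstructed by hand and fed to the same image build invocation; the file name Dockerfile, the scratch path, and the contents of content.txt are assumptions, since the actual testdata/build contents are not part of this log.

# Hedged reconstruction of a context equivalent to testdata/build, derived
# only from the STEP 1/3..3/3 lines above. Paths and file contents are
# illustrative assumptions.
mkdir -p /tmp/build-ctx
echo hello > /tmp/build-ctx/content.txt
cat > /tmp/build-ctx/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# Same invocation the test issues, pointed at the reconstructed context:
out/minikube-linux-amd64 -p functional-891289 image build \
  -t localhost/my-image:functional-891289 /tmp/build-ctx --alsologtostderr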

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.40148535s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-891289
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.42s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "285.307704ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "51.524354ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "292.64327ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "63.293265ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 image load --daemon kicbase/echo-server:functional-891289 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-891289 image load --daemon kicbase/echo-server:functional-891289 --alsologtostderr: (1.254970779s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 image load --daemon kicbase/echo-server:functional-891289 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:252: (dbg) Done: docker pull kicbase/echo-server:latest: (1.14893951s)
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-891289
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 image load --daemon kicbase/echo-server:functional-891289 --alsologtostderr
functional_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p functional-891289 image load --daemon kicbase/echo-server:functional-891289 --alsologtostderr: (3.170965303s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 image save kicbase/echo-server:functional-891289 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:397: (dbg) Done: out/minikube-linux-amd64 -p functional-891289 image save kicbase/echo-server:functional-891289 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (5.147657663s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 image rm kicbase/echo-server:functional-891289 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.06s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: (dbg) Done: out/minikube-linux-amd64 -p functional-891289 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.13335955s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-891289
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 image save --daemon kicbase/echo-server:functional-891289 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-891289
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)
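Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above round-trip an image between the crio runtime inside the VM and the host. A minimal sketch of the same round trip, assuming a writable tarball path on the host (the path below is hypothetical; everything else mirrors the commands logged above):

# Save the image out of the node, remove it, load it back, and list images
# to confirm it returned. /tmp/echo-server-save.tar is an illustrative path.
out/minikube-linux-amd64 -p functional-891289 image save \
  kicbase/echo-server:functional-891289 /tmp/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-891289 image rm \
  kicbase/echo-server:functional-891289 --alsologtostderr
out/minikube-linux-amd64 -p functional-891289 image load \
  /tmp/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-891289 image ls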

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-891289 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-891289 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-zl7xd" [642fcb26-69cb-4668-bcd8-7135a9e0c9ec] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-zl7xd" [642fcb26-69cb-4668-bcd8-7135a9e0c9ec] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003461364s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.18s)
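The DeployApp steps above (create deployment, expose it as a NodePort, wait for the pod) can be reproduced against the same context with plain kubectl; the final wait command is an addition for convenience and not part of the test, which polls pods by label instead:

# Deploy and expose the echoserver as the test does, then block until the
# deployment reports available rather than polling pod phases by hand.
kubectl --context functional-891289 create deployment hello-node \
  --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-891289 expose deployment hello-node \
  --type=NodePort --port=8080
kubectl --context functional-891289 wait --for=condition=available \
  deployment/hello-node --timeout=120s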

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (14.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-891289 /tmp/TestFunctionalparallelMountCmdany-port25136635/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1744635951224269011" to /tmp/TestFunctionalparallelMountCmdany-port25136635/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1744635951224269011" to /tmp/TestFunctionalparallelMountCmdany-port25136635/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1744635951224269011" to /tmp/TestFunctionalparallelMountCmdany-port25136635/001/test-1744635951224269011
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-891289 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (243.814501ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0414 13:05:51.468477 2190400 retry.go:31] will retry after 491.564503ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 14 13:05 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 14 13:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 14 13:05 test-1744635951224269011
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh cat /mount-9p/test-1744635951224269011
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-891289 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [52ac3510-24ad-4ab5-a152-b0da3f5ce3b4] Pending
helpers_test.go:344: "busybox-mount" [52ac3510-24ad-4ab5-a152-b0da3f5ce3b4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [52ac3510-24ad-4ab5-a152-b0da3f5ce3b4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [52ac3510-24ad-4ab5-a152-b0da3f5ce3b4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 12.003817357s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-891289 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-891289 /tmp/TestFunctionalparallelMountCmdany-port25136635/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (14.63s)
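The any-port flow above is: start a background 9p mount, confirm it with findmnt, inspect and read the shared files, then stop the mount process. A minimal manual version, assuming a scratch directory on the host (the path, the file contents, the sleep, and the final kill are illustrative; the harness retries findmnt and stops its own daemonised mount process):

# Start a background 9p mount and verify it from inside the guest.
mkdir -p /tmp/mount-src && echo hello > /tmp/mount-src/created-by-hand
out/minikube-linux-amd64 mount -p functional-891289 /tmp/mount-src:/mount-9p &
MOUNT_PID=$!
sleep 3   # the mount takes a moment; the test retries findmnt instead
out/minikube-linux-amd64 -p functional-891289 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-891289 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-891289 ssh cat /mount-9p/created-by-hand
kill "$MOUNT_PID"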

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 service list -o json
functional_test.go:1511: Took "457.330397ms" to run "out/minikube-linux-amd64 -p functional-891289 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.223:30388
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.223:30388
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)
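HTTPS, Format and URL above resolve the same NodePort endpoint in three representations (https URL, bare IP, http URL). A short sketch that captures the http form and probes it; the curl call at the end is an addition for illustration only, not part of the test:

# Resolve the hello-node endpoint the three ways the tests do, then hit it.
out/minikube-linux-amd64 -p functional-891289 service --namespace=default --https --url hello-node
out/minikube-linux-amd64 -p functional-891289 service hello-node --url --format={{.IP}}
URL=$(out/minikube-linux-amd64 -p functional-891289 service hello-node --url)
curl -s "$URL" | head -n 5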

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-891289 /tmp/TestFunctionalparallelMountCmdspecific-port1167959071/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-891289 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (242.364402ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0414 13:06:06.099550 2190400 retry.go:31] will retry after 319.578919ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-891289 /tmp/TestFunctionalparallelMountCmdspecific-port1167959071/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-891289 ssh "sudo umount -f /mount-9p": exit status 1 (239.711524ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-891289 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-891289 /tmp/TestFunctionalparallelMountCmdspecific-port1167959071/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.69s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-891289 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3662368037/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-891289 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3662368037/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-891289 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3662368037/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-891289 ssh "findmnt -T" /mount1: exit status 1 (298.972994ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0414 13:06:07.842431 2190400 retry.go:31] will retry after 339.872401ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-891289 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-891289 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-891289 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3662368037/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-891289 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3662368037/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-891289 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3662368037/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
2025/04/14 13:06:10 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)
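VerifyCleanup above mounts one host directory at three guest paths and then relies on a single kill flag to tear down every mount for the profile at once. A minimal sketch of that cleanup path (the host directory and the sleep are hypothetical; the mount and kill commands mirror the ones logged above):

# Start several mounts of one host directory, check one, then kill them all.
SRC=/tmp/multi-mount-src; mkdir -p "$SRC"
out/minikube-linux-amd64 mount -p functional-891289 "$SRC":/mount1 &
out/minikube-linux-amd64 mount -p functional-891289 "$SRC":/mount2 &
out/minikube-linux-amd64 mount -p functional-891289 "$SRC":/mount3 &
sleep 3
out/minikube-linux-amd64 -p functional-891289 ssh "findmnt -T" /mount1
out/minikube-linux-amd64 mount -p functional-891289 --kill=true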

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-891289
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-891289
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-891289
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (203.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-956449 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0414 13:07:25.775101 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:07:53.482923 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-956449 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m22.399547637s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (203.07s)
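StartCluster above brings the whole HA topology up with a single start invocation (the --ha flag is what requests multiple control-plane nodes) and then checks every node with status. A condensed sketch mirroring the logged commands:

# Start a multi-control-plane cluster on the kvm2/crio stack and inspect it.
out/minikube-linux-amd64 start -p ha-956449 --wait=true --memory=2200 \
  --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 -p ha-956449 status -v=7 --alsologtostderr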

                                                
                                    
TestMultiControlPlane/serial/DeployApp (12.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-956449 -- rollout status deployment/busybox: (10.127410798s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- exec busybox-58667487b6-26w9g -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- exec busybox-58667487b6-5952h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- exec busybox-58667487b6-qrwzb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- exec busybox-58667487b6-26w9g -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- exec busybox-58667487b6-5952h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- exec busybox-58667487b6-qrwzb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- exec busybox-58667487b6-26w9g -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- exec busybox-58667487b6-5952h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- exec busybox-58667487b6-qrwzb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (12.33s)
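DeployApp above rolls out the busybox deployment and then runs the same three nslookup probes against each replica by name. A compact sketch that loops over the pods instead of enumerating them; it assumes, as in the run above, that the default namespace only contains the busybox pods from testdata/ha/ha-pod-dns-test.yaml:

# Run the DNS probes from the test against every pod in the default namespace.
for pod in $(out/minikube-linux-amd64 kubectl -p ha-956449 -- \
    get pods -o jsonpath='{.items[*].metadata.name}'); do
  for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
    out/minikube-linux-amd64 kubectl -p ha-956449 -- exec "$pod" -- nslookup "$name"
  done
done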

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- exec busybox-58667487b6-26w9g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- exec busybox-58667487b6-26w9g -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- exec busybox-58667487b6-5952h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- exec busybox-58667487b6-5952h -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- exec busybox-58667487b6-qrwzb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956449 -- exec busybox-58667487b6-qrwzb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.26s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (60.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-956449 -v=7 --alsologtostderr
E0414 13:10:27.986255 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:10:27.992868 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:10:28.004338 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:10:28.026198 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:10:28.067835 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:10:28.149347 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:10:28.311409 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:10:28.632868 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:10:29.275034 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:10:30.556698 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:10:33.118638 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:10:38.240683 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:10:48.482176 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-956449 -v=7 --alsologtostderr: (59.313623456s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.17s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-956449 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp testdata/cp-test.txt ha-956449:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp ha-956449:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1625924337/001/cp-test_ha-956449.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp ha-956449:/home/docker/cp-test.txt ha-956449-m02:/home/docker/cp-test_ha-956449_ha-956449-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m02 "sudo cat /home/docker/cp-test_ha-956449_ha-956449-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp ha-956449:/home/docker/cp-test.txt ha-956449-m03:/home/docker/cp-test_ha-956449_ha-956449-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m03 "sudo cat /home/docker/cp-test_ha-956449_ha-956449-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp ha-956449:/home/docker/cp-test.txt ha-956449-m04:/home/docker/cp-test_ha-956449_ha-956449-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m04 "sudo cat /home/docker/cp-test_ha-956449_ha-956449-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp testdata/cp-test.txt ha-956449-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp ha-956449-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1625924337/001/cp-test_ha-956449-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp ha-956449-m02:/home/docker/cp-test.txt ha-956449:/home/docker/cp-test_ha-956449-m02_ha-956449.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449 "sudo cat /home/docker/cp-test_ha-956449-m02_ha-956449.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp ha-956449-m02:/home/docker/cp-test.txt ha-956449-m03:/home/docker/cp-test_ha-956449-m02_ha-956449-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m03 "sudo cat /home/docker/cp-test_ha-956449-m02_ha-956449-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp ha-956449-m02:/home/docker/cp-test.txt ha-956449-m04:/home/docker/cp-test_ha-956449-m02_ha-956449-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m04 "sudo cat /home/docker/cp-test_ha-956449-m02_ha-956449-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp testdata/cp-test.txt ha-956449-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp ha-956449-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1625924337/001/cp-test_ha-956449-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp ha-956449-m03:/home/docker/cp-test.txt ha-956449:/home/docker/cp-test_ha-956449-m03_ha-956449.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449 "sudo cat /home/docker/cp-test_ha-956449-m03_ha-956449.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp ha-956449-m03:/home/docker/cp-test.txt ha-956449-m02:/home/docker/cp-test_ha-956449-m03_ha-956449-m02.txt
E0414 13:11:08.963831 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m02 "sudo cat /home/docker/cp-test_ha-956449-m03_ha-956449-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp ha-956449-m03:/home/docker/cp-test.txt ha-956449-m04:/home/docker/cp-test_ha-956449-m03_ha-956449-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m04 "sudo cat /home/docker/cp-test_ha-956449-m03_ha-956449-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp testdata/cp-test.txt ha-956449-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp ha-956449-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1625924337/001/cp-test_ha-956449-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp ha-956449-m04:/home/docker/cp-test.txt ha-956449:/home/docker/cp-test_ha-956449-m04_ha-956449.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449 "sudo cat /home/docker/cp-test_ha-956449-m04_ha-956449.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp ha-956449-m04:/home/docker/cp-test.txt ha-956449-m02:/home/docker/cp-test_ha-956449-m04_ha-956449-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m02 "sudo cat /home/docker/cp-test_ha-956449-m04_ha-956449-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 cp ha-956449-m04:/home/docker/cp-test.txt ha-956449-m03:/home/docker/cp-test_ha-956449-m04_ha-956449-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m03 "sudo cat /home/docker/cp-test_ha-956449-m04_ha-956449-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.26s)
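For reference, the copy-and-verify pattern exercised above can be replayed by hand with the same commands the test issued (profile ha-956449, node names as recorded in this log):

# copy a file into a node, then read it back over ssh to confirm the contents
out/minikube-linux-amd64 -p ha-956449 cp testdata/cp-test.txt ha-956449-m03:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449-m03 "sudo cat /home/docker/cp-test.txt"
# node-to-node copies follow the same pattern, with a node name on both sides
out/minikube-linux-amd64 -p ha-956449 cp ha-956449-m03:/home/docker/cp-test.txt ha-956449:/home/docker/cp-test_ha-956449-m03_ha-956449.txt
out/minikube-linux-amd64 -p ha-956449 ssh -n ha-956449 "sudo cat /home/docker/cp-test_ha-956449-m03_ha-956449.txt"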

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 node stop m02 -v=7 --alsologtostderr
E0414 13:11:49.925211 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:12:25.774880 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-956449 node stop m02 -v=7 --alsologtostderr: (1m31.00197914s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-956449 status -v=7 --alsologtostderr: exit status 7 (645.426266ms)

                                                
                                                
-- stdout --
	ha-956449
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-956449-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-956449-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-956449-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 13:12:44.386597 2204581 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:12:44.386887 2204581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:12:44.386898 2204581 out.go:358] Setting ErrFile to fd 2...
	I0414 13:12:44.386907 2204581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:12:44.387081 2204581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 13:12:44.387249 2204581 out.go:352] Setting JSON to false
	I0414 13:12:44.387283 2204581 mustload.go:65] Loading cluster: ha-956449
	I0414 13:12:44.387417 2204581 notify.go:220] Checking for updates...
	I0414 13:12:44.387676 2204581 config.go:182] Loaded profile config "ha-956449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:12:44.387704 2204581 status.go:174] checking status of ha-956449 ...
	I0414 13:12:44.388161 2204581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:12:44.388222 2204581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:12:44.406102 2204581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
	I0414 13:12:44.406633 2204581 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:12:44.407277 2204581 main.go:141] libmachine: Using API Version  1
	I0414 13:12:44.407305 2204581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:12:44.407631 2204581 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:12:44.407801 2204581 main.go:141] libmachine: (ha-956449) Calling .GetState
	I0414 13:12:44.409491 2204581 status.go:371] ha-956449 host status = "Running" (err=<nil>)
	I0414 13:12:44.409511 2204581 host.go:66] Checking if "ha-956449" exists ...
	I0414 13:12:44.409865 2204581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:12:44.409934 2204581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:12:44.425097 2204581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0414 13:12:44.425509 2204581 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:12:44.425929 2204581 main.go:141] libmachine: Using API Version  1
	I0414 13:12:44.425963 2204581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:12:44.426346 2204581 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:12:44.426513 2204581 main.go:141] libmachine: (ha-956449) Calling .GetIP
	I0414 13:12:44.429866 2204581 main.go:141] libmachine: (ha-956449) DBG | domain ha-956449 has defined MAC address 52:54:00:84:b7:d1 in network mk-ha-956449
	I0414 13:12:44.430327 2204581 main.go:141] libmachine: (ha-956449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b7:d1", ip: ""} in network mk-ha-956449: {Iface:virbr1 ExpiryTime:2025-04-14 14:06:37 +0000 UTC Type:0 Mac:52:54:00:84:b7:d1 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-956449 Clientid:01:52:54:00:84:b7:d1}
	I0414 13:12:44.430351 2204581 main.go:141] libmachine: (ha-956449) DBG | domain ha-956449 has defined IP address 192.168.39.186 and MAC address 52:54:00:84:b7:d1 in network mk-ha-956449
	I0414 13:12:44.430509 2204581 host.go:66] Checking if "ha-956449" exists ...
	I0414 13:12:44.430825 2204581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:12:44.430867 2204581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:12:44.446143 2204581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45311
	I0414 13:12:44.446605 2204581 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:12:44.447177 2204581 main.go:141] libmachine: Using API Version  1
	I0414 13:12:44.447202 2204581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:12:44.447612 2204581 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:12:44.447791 2204581 main.go:141] libmachine: (ha-956449) Calling .DriverName
	I0414 13:12:44.448012 2204581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 13:12:44.448042 2204581 main.go:141] libmachine: (ha-956449) Calling .GetSSHHostname
	I0414 13:12:44.450771 2204581 main.go:141] libmachine: (ha-956449) DBG | domain ha-956449 has defined MAC address 52:54:00:84:b7:d1 in network mk-ha-956449
	I0414 13:12:44.451233 2204581 main.go:141] libmachine: (ha-956449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b7:d1", ip: ""} in network mk-ha-956449: {Iface:virbr1 ExpiryTime:2025-04-14 14:06:37 +0000 UTC Type:0 Mac:52:54:00:84:b7:d1 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-956449 Clientid:01:52:54:00:84:b7:d1}
	I0414 13:12:44.451261 2204581 main.go:141] libmachine: (ha-956449) DBG | domain ha-956449 has defined IP address 192.168.39.186 and MAC address 52:54:00:84:b7:d1 in network mk-ha-956449
	I0414 13:12:44.451433 2204581 main.go:141] libmachine: (ha-956449) Calling .GetSSHPort
	I0414 13:12:44.451621 2204581 main.go:141] libmachine: (ha-956449) Calling .GetSSHKeyPath
	I0414 13:12:44.451773 2204581 main.go:141] libmachine: (ha-956449) Calling .GetSSHUsername
	I0414 13:12:44.451905 2204581 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/ha-956449/id_rsa Username:docker}
	I0414 13:12:44.538990 2204581 ssh_runner.go:195] Run: systemctl --version
	I0414 13:12:44.546345 2204581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:12:44.562840 2204581 kubeconfig.go:125] found "ha-956449" server: "https://192.168.39.254:8443"
	I0414 13:12:44.562880 2204581 api_server.go:166] Checking apiserver status ...
	I0414 13:12:44.562925 2204581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:12:44.577357 2204581 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup
	W0414 13:12:44.586342 2204581 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 13:12:44.586398 2204581 ssh_runner.go:195] Run: ls
	I0414 13:12:44.591129 2204581 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 13:12:44.597095 2204581 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 13:12:44.597125 2204581 status.go:463] ha-956449 apiserver status = Running (err=<nil>)
	I0414 13:12:44.597137 2204581 status.go:176] ha-956449 status: &{Name:ha-956449 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 13:12:44.597152 2204581 status.go:174] checking status of ha-956449-m02 ...
	I0414 13:12:44.597548 2204581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:12:44.597601 2204581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:12:44.613567 2204581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45189
	I0414 13:12:44.613993 2204581 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:12:44.614406 2204581 main.go:141] libmachine: Using API Version  1
	I0414 13:12:44.614429 2204581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:12:44.614814 2204581 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:12:44.615006 2204581 main.go:141] libmachine: (ha-956449-m02) Calling .GetState
	I0414 13:12:44.616774 2204581 status.go:371] ha-956449-m02 host status = "Stopped" (err=<nil>)
	I0414 13:12:44.616791 2204581 status.go:384] host is not running, skipping remaining checks
	I0414 13:12:44.616798 2204581 status.go:176] ha-956449-m02 status: &{Name:ha-956449-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 13:12:44.616817 2204581 status.go:174] checking status of ha-956449-m03 ...
	I0414 13:12:44.617152 2204581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:12:44.617205 2204581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:12:44.632386 2204581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37427
	I0414 13:12:44.632848 2204581 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:12:44.633217 2204581 main.go:141] libmachine: Using API Version  1
	I0414 13:12:44.633235 2204581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:12:44.633589 2204581 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:12:44.633817 2204581 main.go:141] libmachine: (ha-956449-m03) Calling .GetState
	I0414 13:12:44.635371 2204581 status.go:371] ha-956449-m03 host status = "Running" (err=<nil>)
	I0414 13:12:44.635398 2204581 host.go:66] Checking if "ha-956449-m03" exists ...
	I0414 13:12:44.635715 2204581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:12:44.635760 2204581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:12:44.650544 2204581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37885
	I0414 13:12:44.650906 2204581 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:12:44.651447 2204581 main.go:141] libmachine: Using API Version  1
	I0414 13:12:44.651466 2204581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:12:44.651840 2204581 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:12:44.652017 2204581 main.go:141] libmachine: (ha-956449-m03) Calling .GetIP
	I0414 13:12:44.654996 2204581 main.go:141] libmachine: (ha-956449-m03) DBG | domain ha-956449-m03 has defined MAC address 52:54:00:7e:14:47 in network mk-ha-956449
	I0414 13:12:44.655449 2204581 main.go:141] libmachine: (ha-956449-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:14:47", ip: ""} in network mk-ha-956449: {Iface:virbr1 ExpiryTime:2025-04-14 14:08:39 +0000 UTC Type:0 Mac:52:54:00:7e:14:47 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-956449-m03 Clientid:01:52:54:00:7e:14:47}
	I0414 13:12:44.655482 2204581 main.go:141] libmachine: (ha-956449-m03) DBG | domain ha-956449-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:7e:14:47 in network mk-ha-956449
	I0414 13:12:44.655615 2204581 host.go:66] Checking if "ha-956449-m03" exists ...
	I0414 13:12:44.655973 2204581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:12:44.656010 2204581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:12:44.670872 2204581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0414 13:12:44.671291 2204581 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:12:44.671710 2204581 main.go:141] libmachine: Using API Version  1
	I0414 13:12:44.671735 2204581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:12:44.672113 2204581 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:12:44.672295 2204581 main.go:141] libmachine: (ha-956449-m03) Calling .DriverName
	I0414 13:12:44.672485 2204581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 13:12:44.672508 2204581 main.go:141] libmachine: (ha-956449-m03) Calling .GetSSHHostname
	I0414 13:12:44.675345 2204581 main.go:141] libmachine: (ha-956449-m03) DBG | domain ha-956449-m03 has defined MAC address 52:54:00:7e:14:47 in network mk-ha-956449
	I0414 13:12:44.675802 2204581 main.go:141] libmachine: (ha-956449-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:14:47", ip: ""} in network mk-ha-956449: {Iface:virbr1 ExpiryTime:2025-04-14 14:08:39 +0000 UTC Type:0 Mac:52:54:00:7e:14:47 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-956449-m03 Clientid:01:52:54:00:7e:14:47}
	I0414 13:12:44.675831 2204581 main.go:141] libmachine: (ha-956449-m03) DBG | domain ha-956449-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:7e:14:47 in network mk-ha-956449
	I0414 13:12:44.675979 2204581 main.go:141] libmachine: (ha-956449-m03) Calling .GetSSHPort
	I0414 13:12:44.676153 2204581 main.go:141] libmachine: (ha-956449-m03) Calling .GetSSHKeyPath
	I0414 13:12:44.676305 2204581 main.go:141] libmachine: (ha-956449-m03) Calling .GetSSHUsername
	I0414 13:12:44.676417 2204581 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/ha-956449-m03/id_rsa Username:docker}
	I0414 13:12:44.762242 2204581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:12:44.784460 2204581 kubeconfig.go:125] found "ha-956449" server: "https://192.168.39.254:8443"
	I0414 13:12:44.784491 2204581 api_server.go:166] Checking apiserver status ...
	I0414 13:12:44.784539 2204581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:12:44.799953 2204581 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1498/cgroup
	W0414 13:12:44.810642 2204581 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1498/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 13:12:44.810696 2204581 ssh_runner.go:195] Run: ls
	I0414 13:12:44.815630 2204581 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 13:12:44.820257 2204581 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 13:12:44.820287 2204581 status.go:463] ha-956449-m03 apiserver status = Running (err=<nil>)
	I0414 13:12:44.820298 2204581 status.go:176] ha-956449-m03 status: &{Name:ha-956449-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 13:12:44.820313 2204581 status.go:174] checking status of ha-956449-m04 ...
	I0414 13:12:44.820613 2204581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:12:44.820659 2204581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:12:44.837398 2204581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I0414 13:12:44.837883 2204581 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:12:44.838310 2204581 main.go:141] libmachine: Using API Version  1
	I0414 13:12:44.838335 2204581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:12:44.838708 2204581 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:12:44.838893 2204581 main.go:141] libmachine: (ha-956449-m04) Calling .GetState
	I0414 13:12:44.840342 2204581 status.go:371] ha-956449-m04 host status = "Running" (err=<nil>)
	I0414 13:12:44.840357 2204581 host.go:66] Checking if "ha-956449-m04" exists ...
	I0414 13:12:44.840662 2204581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:12:44.840703 2204581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:12:44.858063 2204581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39523
	I0414 13:12:44.858612 2204581 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:12:44.859045 2204581 main.go:141] libmachine: Using API Version  1
	I0414 13:12:44.859065 2204581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:12:44.859405 2204581 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:12:44.859633 2204581 main.go:141] libmachine: (ha-956449-m04) Calling .GetIP
	I0414 13:12:44.862362 2204581 main.go:141] libmachine: (ha-956449-m04) DBG | domain ha-956449-m04 has defined MAC address 52:54:00:05:0e:54 in network mk-ha-956449
	I0414 13:12:44.862769 2204581 main.go:141] libmachine: (ha-956449-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:0e:54", ip: ""} in network mk-ha-956449: {Iface:virbr1 ExpiryTime:2025-04-14 14:10:15 +0000 UTC Type:0 Mac:52:54:00:05:0e:54 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-956449-m04 Clientid:01:52:54:00:05:0e:54}
	I0414 13:12:44.862796 2204581 main.go:141] libmachine: (ha-956449-m04) DBG | domain ha-956449-m04 has defined IP address 192.168.39.3 and MAC address 52:54:00:05:0e:54 in network mk-ha-956449
	I0414 13:12:44.862935 2204581 host.go:66] Checking if "ha-956449-m04" exists ...
	I0414 13:12:44.863235 2204581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:12:44.863271 2204581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:12:44.878458 2204581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0414 13:12:44.878935 2204581 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:12:44.879437 2204581 main.go:141] libmachine: Using API Version  1
	I0414 13:12:44.879469 2204581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:12:44.879764 2204581 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:12:44.879951 2204581 main.go:141] libmachine: (ha-956449-m04) Calling .DriverName
	I0414 13:12:44.880122 2204581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 13:12:44.880147 2204581 main.go:141] libmachine: (ha-956449-m04) Calling .GetSSHHostname
	I0414 13:12:44.882898 2204581 main.go:141] libmachine: (ha-956449-m04) DBG | domain ha-956449-m04 has defined MAC address 52:54:00:05:0e:54 in network mk-ha-956449
	I0414 13:12:44.883354 2204581 main.go:141] libmachine: (ha-956449-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:0e:54", ip: ""} in network mk-ha-956449: {Iface:virbr1 ExpiryTime:2025-04-14 14:10:15 +0000 UTC Type:0 Mac:52:54:00:05:0e:54 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-956449-m04 Clientid:01:52:54:00:05:0e:54}
	I0414 13:12:44.883376 2204581 main.go:141] libmachine: (ha-956449-m04) DBG | domain ha-956449-m04 has defined IP address 192.168.39.3 and MAC address 52:54:00:05:0e:54 in network mk-ha-956449
	I0414 13:12:44.883583 2204581 main.go:141] libmachine: (ha-956449-m04) Calling .GetSSHPort
	I0414 13:12:44.883742 2204581 main.go:141] libmachine: (ha-956449-m04) Calling .GetSSHKeyPath
	I0414 13:12:44.883867 2204581 main.go:141] libmachine: (ha-956449-m04) Calling .GetSSHUsername
	I0414 13:12:44.884018 2204581 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/ha-956449-m04/id_rsa Username:docker}
	I0414 13:12:44.964997 2204581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:12:44.983210 2204581 status.go:176] ha-956449-m04 status: &{Name:ha-956449-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.65s)
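The stop-and-check step above reduces to two commands; the non-zero exit status 7 from status is the expected result while m02 is stopped, which is why the test still passes:

# stop the second control-plane node, then query cluster status
out/minikube-linux-amd64 -p ha-956449 node stop m02 -v=7 --alsologtostderr
out/minikube-linux-amd64 -p ha-956449 status -v=7 --alsologtostderr   # exits 7 while any host is Stopped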

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (47.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 node start m02 -v=7 --alsologtostderr
E0414 13:13:11.847597 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-956449 node start m02 -v=7 --alsologtostderr: (46.239620822s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (47.19s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (449.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-956449 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-956449 -v=7 --alsologtostderr
E0414 13:15:27.986105 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:15:55.689936 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:17:25.774743 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-956449 -v=7 --alsologtostderr: (4m34.234649802s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-956449 --wait=true -v=7 --alsologtostderr
E0414 13:18:48.846485 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:20:27.986988 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-956449 --wait=true -v=7 --alsologtostderr: (2m55.214951189s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-956449
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (449.57s)
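The full-cluster restart above boils down to three commands from the log; the node list is compared before and after to confirm no node was dropped:

# stop every node in the HA profile, bring the cluster back, and re-check the node list
out/minikube-linux-amd64 stop -p ha-956449 -v=7 --alsologtostderr
out/minikube-linux-amd64 start -p ha-956449 --wait=true -v=7 --alsologtostderr
out/minikube-linux-amd64 node list -p ha-956449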

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-956449 node delete m03 -v=7 --alsologtostderr: (17.727164947s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.47s)
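The Ready check at ha_test.go:521 runs kubectl with a go-template over the node conditions; a hand-runnable form of the recorded sequence (quoting adjusted for an interactive shell) is:

# delete the third control-plane node, then print one Ready status per remaining node
out/minikube-linux-amd64 -p ha-956449 node delete m03 -v=7 --alsologtostderr
kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'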

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 stop -v=7 --alsologtostderr
E0414 13:22:25.774392 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:25:27.986264 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-956449 stop -v=7 --alsologtostderr: (4m32.860891399s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-956449 status -v=7 --alsologtostderr: exit status 7 (116.889038ms)

                                                
                                                
-- stdout --
	ha-956449
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-956449-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-956449-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 13:25:55.279120 2209357 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:25:55.279367 2209357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:25:55.279375 2209357 out.go:358] Setting ErrFile to fd 2...
	I0414 13:25:55.279378 2209357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:25:55.279550 2209357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 13:25:55.279699 2209357 out.go:352] Setting JSON to false
	I0414 13:25:55.279741 2209357 mustload.go:65] Loading cluster: ha-956449
	I0414 13:25:55.279786 2209357 notify.go:220] Checking for updates...
	I0414 13:25:55.280126 2209357 config.go:182] Loaded profile config "ha-956449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:25:55.280159 2209357 status.go:174] checking status of ha-956449 ...
	I0414 13:25:55.280579 2209357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:25:55.280629 2209357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:25:55.303670 2209357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38819
	I0414 13:25:55.304229 2209357 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:25:55.304917 2209357 main.go:141] libmachine: Using API Version  1
	I0414 13:25:55.304959 2209357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:25:55.305358 2209357 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:25:55.305612 2209357 main.go:141] libmachine: (ha-956449) Calling .GetState
	I0414 13:25:55.307233 2209357 status.go:371] ha-956449 host status = "Stopped" (err=<nil>)
	I0414 13:25:55.307248 2209357 status.go:384] host is not running, skipping remaining checks
	I0414 13:25:55.307254 2209357 status.go:176] ha-956449 status: &{Name:ha-956449 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 13:25:55.307287 2209357 status.go:174] checking status of ha-956449-m02 ...
	I0414 13:25:55.307612 2209357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:25:55.307690 2209357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:25:55.323297 2209357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42475
	I0414 13:25:55.323732 2209357 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:25:55.324160 2209357 main.go:141] libmachine: Using API Version  1
	I0414 13:25:55.324190 2209357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:25:55.324521 2209357 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:25:55.324738 2209357 main.go:141] libmachine: (ha-956449-m02) Calling .GetState
	I0414 13:25:55.326372 2209357 status.go:371] ha-956449-m02 host status = "Stopped" (err=<nil>)
	I0414 13:25:55.326386 2209357 status.go:384] host is not running, skipping remaining checks
	I0414 13:25:55.326392 2209357 status.go:176] ha-956449-m02 status: &{Name:ha-956449-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 13:25:55.326407 2209357 status.go:174] checking status of ha-956449-m04 ...
	I0414 13:25:55.326764 2209357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:25:55.326814 2209357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:25:55.342621 2209357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43255
	I0414 13:25:55.343081 2209357 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:25:55.343587 2209357 main.go:141] libmachine: Using API Version  1
	I0414 13:25:55.343607 2209357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:25:55.343976 2209357 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:25:55.344155 2209357 main.go:141] libmachine: (ha-956449-m04) Calling .GetState
	I0414 13:25:55.345679 2209357 status.go:371] ha-956449-m04 host status = "Stopped" (err=<nil>)
	I0414 13:25:55.345694 2209357 status.go:384] host is not running, skipping remaining checks
	I0414 13:25:55.345700 2209357 status.go:176] ha-956449-m04 status: &{Name:ha-956449-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.98s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (99.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-956449 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0414 13:26:51.052431 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:27:25.774581 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-956449 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m38.472071655s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (99.27s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (82.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-956449 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-956449 --control-plane -v=7 --alsologtostderr: (1m21.712793695s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-956449 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.59s)
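Re-adding a control-plane node repeats two commands from the log:

# add a new control-plane node to the existing HA profile, then re-check status
out/minikube-linux-amd64 node add -p ha-956449 --control-plane -v=7 --alsologtostderr
out/minikube-linux-amd64 -p ha-956449 status -v=7 --alsologtostderr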

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

                                                
                                    
TestJSONOutput/start/Command (58.58s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-054991 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-054991 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (58.576257899s)
--- PASS: TestJSONOutput/start/Command (58.58s)
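The TestJSONOutput group runs the same profile through start, pause, unpause, and stop with --output=json, so each step is emitted as a structured JSON event on stdout instead of human-readable text; the commands as recorded in this report are:

out/minikube-linux-amd64 start -p json-output-054991 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 pause -p json-output-054991 --output=json --user=testUser
out/minikube-linux-amd64 unpause -p json-output-054991 --output=json --user=testUser
out/minikube-linux-amd64 stop -p json-output-054991 --output=json --user=testUser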

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-054991 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-054991 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-054991 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-054991 --output=json --user=testUser: (7.363423897s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-869429 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-869429 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.695349ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9a779616-7988-4940-aa6c-559dfbdb1d02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-869429] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"031e7ccd-61ee-4e6c-9bf9-ce489a44aa54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20623"}}
	{"specversion":"1.0","id":"8acf3626-4397-457d-a173-39478c804b57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"baa6aec3-5fc4-49fa-9a47-0460ee811d1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig"}}
	{"specversion":"1.0","id":"62119b27-779e-47e0-8401-459dded920fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube"}}
	{"specversion":"1.0","id":"d4186630-4d0d-4dfc-8f57-879d38341f0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cd51ae7f-7dab-4316-8398-273775255ca8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"490b5770-6a25-4f77-8404-363047ff466c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-869429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-869429
--- PASS: TestErrorJSONOutput (0.20s)
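The failing driver produces a terminal io.k8s.sigs.minikube.error event (name DRV_UNSUPPORTED_OS, exit code 56) as the last JSON line above. As a side note that is not part of the test itself, the line-delimited stream can be filtered on the host, assuming jq is installed:

# extract the error event from the JSON stream (jq is an assumption, not used by the test)
out/minikube-linux-amd64 start -p json-output-error-869429 --memory=2200 --output=json --wait=true --driver=fail \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
# expected output, per the event recorded above:
# DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64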

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (85.75s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-104065 --driver=kvm2  --container-runtime=crio
E0414 13:30:27.987859 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-104065 --driver=kvm2  --container-runtime=crio: (42.184317709s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-117229 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-117229 --driver=kvm2  --container-runtime=crio: (40.617621157s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-104065
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-117229
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-117229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-117229
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-117229: (1.038839085s)
helpers_test.go:175: Cleaning up "first-104065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-104065
--- PASS: TestMinikubeProfile (85.75s)
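Profile switching in this test is just the profile subcommand; the recorded sequence, runnable as-is, is:

# make each profile the active one in turn and dump the profile list as JSON
out/minikube-linux-amd64 profile first-104065
out/minikube-linux-amd64 profile list -ojson
out/minikube-linux-amd64 profile second-117229
out/minikube-linux-amd64 profile list -ojson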

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.96s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-307642 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-307642 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.959391475s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.96s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-307642 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-307642 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
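The mount verification consists of the two ssh probes recorded above:

# confirm the host directory is visible in the guest and that it is a 9p mount
out/minikube-linux-amd64 -p mount-start-1-307642 ssh -- ls /minikube-host
out/minikube-linux-amd64 -p mount-start-1-307642 ssh -- mount | grep 9p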

                                                
                                    
TestMountStart/serial/StartWithMountSecond (31.4s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-322776 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0414 13:32:25.778573 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-322776 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.403804655s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.40s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-322776 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-322776 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-307642 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-322776 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-322776 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (2.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-322776
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-322776: (2.280315691s)
--- PASS: TestMountStart/serial/Stop (2.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.56s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-322776
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-322776: (22.56266636s)
--- PASS: TestMountStart/serial/RestartStopped (23.56s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-322776 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-322776 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (118.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-190122 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-190122 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m57.766220654s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (118.19s)
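
For reference, the two-node bring-up exercised above reduces to a single start invocation; a hedged sketch reusing the values from this run (profile name, memory, driver and runtime), with minikube standing in for the CI binary:

    $ minikube start -p multinode-190122 --nodes=2 --memory=2200 --driver=kvm2 --container-runtime=crio
    $ minikube -p multinode-190122 status     # control plane and worker should both report Running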

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (10.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-190122 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-190122 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-190122 -- rollout status deployment/busybox: (8.94400242s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-190122 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-190122 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-190122 -- exec busybox-58667487b6-74lxx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-190122 -- exec busybox-58667487b6-7q9j2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-190122 -- exec busybox-58667487b6-74lxx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-190122 -- exec busybox-58667487b6-7q9j2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-190122 -- exec busybox-58667487b6-74lxx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-190122 -- exec busybox-58667487b6-7q9j2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (10.62s)
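
The DNS probes above run nslookup inside each busybox replica, so resolution is exercised from both nodes. A short sketch of the same idea (deployment and pod names are specific to this run):

    $ minikube kubectl -p multinode-190122 -- rollout status deployment/busybox
    $ minikube kubectl -p multinode-190122 -- get pods -o jsonpath='{.items[*].metadata.name}'
    $ minikube kubectl -p multinode-190122 -- exec busybox-58667487b6-74lxx -- nslookup kubernetes.default.svc.cluster.local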

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.80s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-190122 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-190122 -- exec busybox-58667487b6-74lxx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-190122 -- exec busybox-58667487b6-74lxx -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-190122 -- exec busybox-58667487b6-7q9j2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-190122 -- exec busybox-58667487b6-7q9j2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)

                                                
                                    
TestMultiNode/serial/AddNode (54.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-190122 -v 3 --alsologtostderr
E0414 13:35:27.986700 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:35:28.847964 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-190122 -v 3 --alsologtostderr: (54.094822374s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.68s)
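
Growing the cluster is one command against the existing profile; a minimal sketch with the verbosity flags from this run dropped:

    $ minikube node add -p multinode-190122      # provisions a new machine and joins it as a worker
    $ minikube -p multinode-190122 status        # the new node appears as an additional Worker entry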

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-190122 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 cp testdata/cp-test.txt multinode-190122:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 ssh -n multinode-190122 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 cp multinode-190122:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2096707965/001/cp-test_multinode-190122.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 ssh -n multinode-190122 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 cp multinode-190122:/home/docker/cp-test.txt multinode-190122-m02:/home/docker/cp-test_multinode-190122_multinode-190122-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 ssh -n multinode-190122 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 ssh -n multinode-190122-m02 "sudo cat /home/docker/cp-test_multinode-190122_multinode-190122-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 cp multinode-190122:/home/docker/cp-test.txt multinode-190122-m03:/home/docker/cp-test_multinode-190122_multinode-190122-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 ssh -n multinode-190122 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 ssh -n multinode-190122-m03 "sudo cat /home/docker/cp-test_multinode-190122_multinode-190122-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 cp testdata/cp-test.txt multinode-190122-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 ssh -n multinode-190122-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 cp multinode-190122-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2096707965/001/cp-test_multinode-190122-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 ssh -n multinode-190122-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 cp multinode-190122-m02:/home/docker/cp-test.txt multinode-190122:/home/docker/cp-test_multinode-190122-m02_multinode-190122.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 ssh -n multinode-190122-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 ssh -n multinode-190122 "sudo cat /home/docker/cp-test_multinode-190122-m02_multinode-190122.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 cp multinode-190122-m02:/home/docker/cp-test.txt multinode-190122-m03:/home/docker/cp-test_multinode-190122-m02_multinode-190122-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 ssh -n multinode-190122-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 ssh -n multinode-190122-m03 "sudo cat /home/docker/cp-test_multinode-190122-m02_multinode-190122-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 cp testdata/cp-test.txt multinode-190122-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 ssh -n multinode-190122-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 cp multinode-190122-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2096707965/001/cp-test_multinode-190122-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 ssh -n multinode-190122-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 cp multinode-190122-m03:/home/docker/cp-test.txt multinode-190122:/home/docker/cp-test_multinode-190122-m03_multinode-190122.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 ssh -n multinode-190122-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 ssh -n multinode-190122 "sudo cat /home/docker/cp-test_multinode-190122-m03_multinode-190122.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 cp multinode-190122-m03:/home/docker/cp-test.txt multinode-190122-m02:/home/docker/cp-test_multinode-190122-m03_multinode-190122-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 ssh -n multinode-190122-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 ssh -n multinode-190122-m02 "sudo cat /home/docker/cp-test_multinode-190122-m03_multinode-190122-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.47s)
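
The copy matrix above moves the same test file between every pair of nodes. A two-line sketch of one direction, using -n to target a specific node for the read-back:

    $ minikube -p multinode-190122 cp testdata/cp-test.txt multinode-190122-m02:/home/docker/cp-test.txt
    $ minikube -p multinode-190122 ssh -n multinode-190122-m02 "sudo cat /home/docker/cp-test.txt"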

                                                
                                    
TestMultiNode/serial/StopNode (2.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-190122 node stop m03: (1.490676928s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-190122 status: exit status 7 (449.776485ms)

                                                
                                                
-- stdout --
	multinode-190122
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-190122-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-190122-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-190122 status --alsologtostderr: exit status 7 (443.922737ms)

                                                
                                                
-- stdout --
	multinode-190122
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-190122-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-190122-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 13:36:21.031128 2217064 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:36:21.031236 2217064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:36:21.031244 2217064 out.go:358] Setting ErrFile to fd 2...
	I0414 13:36:21.031248 2217064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:36:21.031442 2217064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 13:36:21.031613 2217064 out.go:352] Setting JSON to false
	I0414 13:36:21.031645 2217064 mustload.go:65] Loading cluster: multinode-190122
	I0414 13:36:21.031753 2217064 notify.go:220] Checking for updates...
	I0414 13:36:21.032073 2217064 config.go:182] Loaded profile config "multinode-190122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:36:21.032097 2217064 status.go:174] checking status of multinode-190122 ...
	I0414 13:36:21.032582 2217064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:36:21.032645 2217064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:36:21.049448 2217064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
	I0414 13:36:21.049902 2217064 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:36:21.050489 2217064 main.go:141] libmachine: Using API Version  1
	I0414 13:36:21.050523 2217064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:36:21.050938 2217064 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:36:21.051150 2217064 main.go:141] libmachine: (multinode-190122) Calling .GetState
	I0414 13:36:21.052844 2217064 status.go:371] multinode-190122 host status = "Running" (err=<nil>)
	I0414 13:36:21.052866 2217064 host.go:66] Checking if "multinode-190122" exists ...
	I0414 13:36:21.053159 2217064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:36:21.053200 2217064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:36:21.069503 2217064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42955
	I0414 13:36:21.069936 2217064 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:36:21.070432 2217064 main.go:141] libmachine: Using API Version  1
	I0414 13:36:21.070456 2217064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:36:21.070868 2217064 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:36:21.071172 2217064 main.go:141] libmachine: (multinode-190122) Calling .GetIP
	I0414 13:36:21.073979 2217064 main.go:141] libmachine: (multinode-190122) DBG | domain multinode-190122 has defined MAC address 52:54:00:41:c9:89 in network mk-multinode-190122
	I0414 13:36:21.074410 2217064 main.go:141] libmachine: (multinode-190122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:c9:89", ip: ""} in network mk-multinode-190122: {Iface:virbr1 ExpiryTime:2025-04-14 14:33:22 +0000 UTC Type:0 Mac:52:54:00:41:c9:89 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-190122 Clientid:01:52:54:00:41:c9:89}
	I0414 13:36:21.074441 2217064 main.go:141] libmachine: (multinode-190122) DBG | domain multinode-190122 has defined IP address 192.168.39.213 and MAC address 52:54:00:41:c9:89 in network mk-multinode-190122
	I0414 13:36:21.074587 2217064 host.go:66] Checking if "multinode-190122" exists ...
	I0414 13:36:21.074898 2217064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:36:21.074949 2217064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:36:21.091218 2217064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
	I0414 13:36:21.091708 2217064 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:36:21.092299 2217064 main.go:141] libmachine: Using API Version  1
	I0414 13:36:21.092331 2217064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:36:21.092694 2217064 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:36:21.092962 2217064 main.go:141] libmachine: (multinode-190122) Calling .DriverName
	I0414 13:36:21.093227 2217064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 13:36:21.093262 2217064 main.go:141] libmachine: (multinode-190122) Calling .GetSSHHostname
	I0414 13:36:21.096066 2217064 main.go:141] libmachine: (multinode-190122) DBG | domain multinode-190122 has defined MAC address 52:54:00:41:c9:89 in network mk-multinode-190122
	I0414 13:36:21.096518 2217064 main.go:141] libmachine: (multinode-190122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:c9:89", ip: ""} in network mk-multinode-190122: {Iface:virbr1 ExpiryTime:2025-04-14 14:33:22 +0000 UTC Type:0 Mac:52:54:00:41:c9:89 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-190122 Clientid:01:52:54:00:41:c9:89}
	I0414 13:36:21.096556 2217064 main.go:141] libmachine: (multinode-190122) DBG | domain multinode-190122 has defined IP address 192.168.39.213 and MAC address 52:54:00:41:c9:89 in network mk-multinode-190122
	I0414 13:36:21.096654 2217064 main.go:141] libmachine: (multinode-190122) Calling .GetSSHPort
	I0414 13:36:21.096841 2217064 main.go:141] libmachine: (multinode-190122) Calling .GetSSHKeyPath
	I0414 13:36:21.097004 2217064 main.go:141] libmachine: (multinode-190122) Calling .GetSSHUsername
	I0414 13:36:21.097157 2217064 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/multinode-190122/id_rsa Username:docker}
	I0414 13:36:21.184149 2217064 ssh_runner.go:195] Run: systemctl --version
	I0414 13:36:21.194609 2217064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:36:21.209748 2217064 kubeconfig.go:125] found "multinode-190122" server: "https://192.168.39.213:8443"
	I0414 13:36:21.209792 2217064 api_server.go:166] Checking apiserver status ...
	I0414 13:36:21.209841 2217064 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:36:21.224850 2217064 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1089/cgroup
	W0414 13:36:21.235420 2217064 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1089/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 13:36:21.235474 2217064 ssh_runner.go:195] Run: ls
	I0414 13:36:21.241183 2217064 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0414 13:36:21.245756 2217064 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0414 13:36:21.245779 2217064 status.go:463] multinode-190122 apiserver status = Running (err=<nil>)
	I0414 13:36:21.245790 2217064 status.go:176] multinode-190122 status: &{Name:multinode-190122 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 13:36:21.245812 2217064 status.go:174] checking status of multinode-190122-m02 ...
	I0414 13:36:21.246102 2217064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:36:21.246139 2217064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:36:21.262967 2217064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43425
	I0414 13:36:21.263496 2217064 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:36:21.264068 2217064 main.go:141] libmachine: Using API Version  1
	I0414 13:36:21.264094 2217064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:36:21.264406 2217064 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:36:21.264592 2217064 main.go:141] libmachine: (multinode-190122-m02) Calling .GetState
	I0414 13:36:21.266098 2217064 status.go:371] multinode-190122-m02 host status = "Running" (err=<nil>)
	I0414 13:36:21.266119 2217064 host.go:66] Checking if "multinode-190122-m02" exists ...
	I0414 13:36:21.266515 2217064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:36:21.266567 2217064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:36:21.282197 2217064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44065
	I0414 13:36:21.282654 2217064 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:36:21.283122 2217064 main.go:141] libmachine: Using API Version  1
	I0414 13:36:21.283145 2217064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:36:21.283514 2217064 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:36:21.283712 2217064 main.go:141] libmachine: (multinode-190122-m02) Calling .GetIP
	I0414 13:36:21.286416 2217064 main.go:141] libmachine: (multinode-190122-m02) DBG | domain multinode-190122-m02 has defined MAC address 52:54:00:de:e2:8e in network mk-multinode-190122
	I0414 13:36:21.286930 2217064 main.go:141] libmachine: (multinode-190122-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e2:8e", ip: ""} in network mk-multinode-190122: {Iface:virbr1 ExpiryTime:2025-04-14 14:34:27 +0000 UTC Type:0 Mac:52:54:00:de:e2:8e Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:multinode-190122-m02 Clientid:01:52:54:00:de:e2:8e}
	I0414 13:36:21.286965 2217064 main.go:141] libmachine: (multinode-190122-m02) DBG | domain multinode-190122-m02 has defined IP address 192.168.39.114 and MAC address 52:54:00:de:e2:8e in network mk-multinode-190122
	I0414 13:36:21.287153 2217064 host.go:66] Checking if "multinode-190122-m02" exists ...
	I0414 13:36:21.287493 2217064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:36:21.287536 2217064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:36:21.303061 2217064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43503
	I0414 13:36:21.303604 2217064 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:36:21.304040 2217064 main.go:141] libmachine: Using API Version  1
	I0414 13:36:21.304061 2217064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:36:21.304445 2217064 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:36:21.304648 2217064 main.go:141] libmachine: (multinode-190122-m02) Calling .DriverName
	I0414 13:36:21.304878 2217064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 13:36:21.304901 2217064 main.go:141] libmachine: (multinode-190122-m02) Calling .GetSSHHostname
	I0414 13:36:21.307610 2217064 main.go:141] libmachine: (multinode-190122-m02) DBG | domain multinode-190122-m02 has defined MAC address 52:54:00:de:e2:8e in network mk-multinode-190122
	I0414 13:36:21.308014 2217064 main.go:141] libmachine: (multinode-190122-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e2:8e", ip: ""} in network mk-multinode-190122: {Iface:virbr1 ExpiryTime:2025-04-14 14:34:27 +0000 UTC Type:0 Mac:52:54:00:de:e2:8e Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:multinode-190122-m02 Clientid:01:52:54:00:de:e2:8e}
	I0414 13:36:21.308048 2217064 main.go:141] libmachine: (multinode-190122-m02) DBG | domain multinode-190122-m02 has defined IP address 192.168.39.114 and MAC address 52:54:00:de:e2:8e in network mk-multinode-190122
	I0414 13:36:21.308220 2217064 main.go:141] libmachine: (multinode-190122-m02) Calling .GetSSHPort
	I0414 13:36:21.308396 2217064 main.go:141] libmachine: (multinode-190122-m02) Calling .GetSSHKeyPath
	I0414 13:36:21.308545 2217064 main.go:141] libmachine: (multinode-190122-m02) Calling .GetSSHUsername
	I0414 13:36:21.308687 2217064 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20623-2183077/.minikube/machines/multinode-190122-m02/id_rsa Username:docker}
	I0414 13:36:21.392079 2217064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:36:21.406317 2217064 status.go:176] multinode-190122-m02 status: &{Name:multinode-190122-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0414 13:36:21.406358 2217064 status.go:174] checking status of multinode-190122-m03 ...
	I0414 13:36:21.406670 2217064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:36:21.406726 2217064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:36:21.423003 2217064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39561
	I0414 13:36:21.423571 2217064 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:36:21.424077 2217064 main.go:141] libmachine: Using API Version  1
	I0414 13:36:21.424098 2217064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:36:21.424429 2217064 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:36:21.424660 2217064 main.go:141] libmachine: (multinode-190122-m03) Calling .GetState
	I0414 13:36:21.426255 2217064 status.go:371] multinode-190122-m03 host status = "Stopped" (err=<nil>)
	I0414 13:36:21.426271 2217064 status.go:384] host is not running, skipping remaining checks
	I0414 13:36:21.426289 2217064 status.go:176] multinode-190122-m03 status: &{Name:multinode-190122-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)
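
Note the exit codes above: with one node stopped, minikube status exits 7 instead of 0, so the non-zero exits are the expected outcome rather than a failure. A minimal sketch:

    $ minikube -p multinode-190122 node stop m03
    $ minikube -p multinode-190122 status; echo "exit=$?"    # per-node table plus exit=7 while m03 is down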

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-190122 node start m03 -v=7 --alsologtostderr: (39.65380035s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.32s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (343.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-190122
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-190122
E0414 13:37:25.781181 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-190122: (3m3.004939859s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-190122 --wait=true -v=8 --alsologtostderr
E0414 13:40:27.986570 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:42:25.775048 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-190122 --wait=true -v=8 --alsologtostderr: (2m40.851700242s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-190122
--- PASS: TestMultiNode/serial/RestartKeepsNodes (343.96s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.70s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-190122 node delete m03: (2.148867731s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.70s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 stop
E0414 13:43:31.056044 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:45:27.987269 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-190122 stop: (3m1.69754158s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-190122 status: exit status 7 (88.663215ms)

                                                
                                                
-- stdout --
	multinode-190122
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-190122-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-190122 status --alsologtostderr: exit status 7 (91.258185ms)

                                                
                                                
-- stdout --
	multinode-190122
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-190122-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 13:45:50.234173 2220133 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:45:50.234461 2220133 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:45:50.234473 2220133 out.go:358] Setting ErrFile to fd 2...
	I0414 13:45:50.234476 2220133 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:45:50.234693 2220133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 13:45:50.234952 2220133 out.go:352] Setting JSON to false
	I0414 13:45:50.234992 2220133 mustload.go:65] Loading cluster: multinode-190122
	I0414 13:45:50.235101 2220133 notify.go:220] Checking for updates...
	I0414 13:45:50.235491 2220133 config.go:182] Loaded profile config "multinode-190122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:45:50.235525 2220133 status.go:174] checking status of multinode-190122 ...
	I0414 13:45:50.235991 2220133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:45:50.236045 2220133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:45:50.256276 2220133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43489
	I0414 13:45:50.256780 2220133 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:45:50.257404 2220133 main.go:141] libmachine: Using API Version  1
	I0414 13:45:50.257431 2220133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:45:50.257867 2220133 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:45:50.258097 2220133 main.go:141] libmachine: (multinode-190122) Calling .GetState
	I0414 13:45:50.259865 2220133 status.go:371] multinode-190122 host status = "Stopped" (err=<nil>)
	I0414 13:45:50.259881 2220133 status.go:384] host is not running, skipping remaining checks
	I0414 13:45:50.259888 2220133 status.go:176] multinode-190122 status: &{Name:multinode-190122 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 13:45:50.259926 2220133 status.go:174] checking status of multinode-190122-m02 ...
	I0414 13:45:50.260254 2220133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:45:50.260324 2220133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:45:50.275087 2220133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33981
	I0414 13:45:50.275550 2220133 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:45:50.276015 2220133 main.go:141] libmachine: Using API Version  1
	I0414 13:45:50.276031 2220133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:45:50.276374 2220133 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:45:50.276534 2220133 main.go:141] libmachine: (multinode-190122-m02) Calling .GetState
	I0414 13:45:50.278087 2220133 status.go:371] multinode-190122-m02 host status = "Stopped" (err=<nil>)
	I0414 13:45:50.278102 2220133 status.go:384] host is not running, skipping remaining checks
	I0414 13:45:50.278110 2220133 status.go:176] multinode-190122-m02 status: &{Name:multinode-190122-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.88s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (157.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-190122 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0414 13:47:25.775332 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-190122 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m37.226502955s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-190122 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (157.76s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-190122
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-190122-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-190122-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (63.372971ms)

                                                
                                                
-- stdout --
	* [multinode-190122-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20623
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-190122-m02' is duplicated with machine name 'multinode-190122-m02' in profile 'multinode-190122'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-190122-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-190122-m03 --driver=kvm2  --container-runtime=crio: (46.064426158s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-190122
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-190122: exit status 80 (234.051548ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-190122 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-190122-m03 already exists in multinode-190122-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_9.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-190122-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-190122-m03: (1.019199476s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.43s)
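
The two rejected commands above illustrate the naming rules: a new profile may not reuse a machine name that already belongs to a multi-node profile (exit 14), and node add refuses to create a node whose name is already held by another profile (exit 80). A sketch of the conflicting and non-conflicting cases, with names taken from this run:

    $ minikube start -p multinode-190122-m02 --driver=kvm2 --container-runtime=crio    # rejected: duplicates a machine of profile multinode-190122
    $ minikube start -p multinode-190122-m03 --driver=kvm2 --container-runtime=crio    # accepted as an independent profile
    $ minikube delete -p multinode-190122-m03                                          # clean up the extra profile afterwards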

                                                
                                    
TestScheduledStopUnix (115.74s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-142898 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-142898 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.02901496s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-142898 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-142898 -n scheduled-stop-142898
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-142898 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0414 13:53:30.555901 2190400 retry.go:31] will retry after 146.611µs: open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/scheduled-stop-142898/pid: no such file or directory
I0414 13:53:30.557085 2190400 retry.go:31] will retry after 122.32µs: open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/scheduled-stop-142898/pid: no such file or directory
I0414 13:53:30.558258 2190400 retry.go:31] will retry after 269.453µs: open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/scheduled-stop-142898/pid: no such file or directory
I0414 13:53:30.559407 2190400 retry.go:31] will retry after 438.544µs: open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/scheduled-stop-142898/pid: no such file or directory
I0414 13:53:30.560558 2190400 retry.go:31] will retry after 495.298µs: open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/scheduled-stop-142898/pid: no such file or directory
I0414 13:53:30.561738 2190400 retry.go:31] will retry after 514.714µs: open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/scheduled-stop-142898/pid: no such file or directory
I0414 13:53:30.562910 2190400 retry.go:31] will retry after 1.210673ms: open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/scheduled-stop-142898/pid: no such file or directory
I0414 13:53:30.565122 2190400 retry.go:31] will retry after 1.922545ms: open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/scheduled-stop-142898/pid: no such file or directory
I0414 13:53:30.567354 2190400 retry.go:31] will retry after 3.237604ms: open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/scheduled-stop-142898/pid: no such file or directory
I0414 13:53:30.571592 2190400 retry.go:31] will retry after 2.432527ms: open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/scheduled-stop-142898/pid: no such file or directory
I0414 13:53:30.574798 2190400 retry.go:31] will retry after 3.477564ms: open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/scheduled-stop-142898/pid: no such file or directory
I0414 13:53:30.579008 2190400 retry.go:31] will retry after 7.658583ms: open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/scheduled-stop-142898/pid: no such file or directory
I0414 13:53:30.587243 2190400 retry.go:31] will retry after 11.273141ms: open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/scheduled-stop-142898/pid: no such file or directory
I0414 13:53:30.599511 2190400 retry.go:31] will retry after 24.080936ms: open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/scheduled-stop-142898/pid: no such file or directory
I0414 13:53:30.623733 2190400 retry.go:31] will retry after 37.573691ms: open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/scheduled-stop-142898/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-142898 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-142898 -n scheduled-stop-142898
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-142898
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-142898 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-142898
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-142898: exit status 7 (68.849651ms)

                                                
                                                
-- stdout --
	scheduled-stop-142898
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-142898 -n scheduled-stop-142898
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-142898 -n scheduled-stop-142898: exit status 7 (67.607401ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-142898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-142898
--- PASS: TestScheduledStopUnix (115.74s)
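
The scheduled-stop flow exercised here is: schedule a stop, poll the remaining time, and optionally cancel. A minimal sketch with the flags used in this run:

    $ minikube stop -p scheduled-stop-142898 --schedule 5m                   # arrange a stop five minutes from now
    $ minikube status -p scheduled-stop-142898 --format='{{.TimeToStop}}'    # shows how long until the scheduled stop
    $ minikube stop -p scheduled-stop-142898 --cancel-scheduled              # abort the pending stop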

                                                
                                    
TestRunningBinaryUpgrade (161.18s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1178116129 start -p running-upgrade-742924 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1178116129 start -p running-upgrade-742924 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m8.573642653s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-742924 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-742924 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m27.464255791s)
helpers_test.go:175: Cleaning up "running-upgrade-742924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-742924
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-742924: (1.316739308s)
--- PASS: TestRunningBinaryUpgrade (161.18s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-489001 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-489001 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (87.174272ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-489001] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20623
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (120.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-489001 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-489001 --driver=kvm2  --container-runtime=crio: (2m0.205246535s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-489001 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (120.47s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.41s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.41s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (174.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3135109915 start -p stopped-upgrade-078978 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3135109915 start -p stopped-upgrade-078978 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m19.262001436s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3135109915 -p stopped-upgrade-078978 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3135109915 -p stopped-upgrade-078978 stop: (12.145827109s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-078978 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-078978 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m23.551273548s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (174.96s)
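
The upgrade path being validated is: create the cluster with the old release, stop it, then start the same profile with the binary under test. A hedged sketch using the temporary v1.26.0 binary path from this run (any older release exercises the same path; note the old binary still takes --vm-driver):

    $ /tmp/minikube-v1.26.0.3135109915 start -p stopped-upgrade-078978 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    $ /tmp/minikube-v1.26.0.3135109915 -p stopped-upgrade-078978 stop
    $ out/minikube-linux-amd64 start -p stopped-upgrade-078978 --memory=2200 --driver=kvm2 --container-runtime=crio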

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-489001 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-489001 --no-kubernetes --driver=kvm2  --container-runtime=crio: (17.162131486s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-489001 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-489001 status -o json: exit status 2 (268.413444ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-489001","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-489001
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.31s)

                                                
                                    
TestNoKubernetes/serial/Start (46.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-489001 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0414 13:57:25.781049 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-489001 --no-kubernetes --driver=kvm2  --container-runtime=crio: (46.110068589s)
--- PASS: TestNoKubernetes/serial/Start (46.11s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-489001 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-489001 "sudo systemctl is-active --quiet service kubelet": exit status 1 (192.599014ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
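Note: this verification relies only on systemctl's exit code: "systemctl is-active --quiet" exits 0 when the unit is active and non-zero otherwise (3 generally means inactive), so any non-zero exit from the ssh'd command is taken as proof that kubelet is not running. Below is a minimal stand-alone sketch of the same probe, assuming the minikube binary is on PATH and the NoKubernetes-489001 profile from this run still exists; it is not the test's own implementation.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same probe as the test: ask systemd inside the guest, via minikube ssh,
	// whether kubelet is active. The command string is copied from the log above.
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-489001",
		"sudo systemctl is-active --quiet service kubelet")

	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active: unexpected for a --no-kubernetes profile")
	case errors.As(err, &exitErr):
		fmt.Printf("kubelet not active (exit status %d), as expected\n", exitErr.ExitCode())
	default:
		fmt.Println("could not run minikube ssh:", err)
	}
}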

                                                
                                    
TestNoKubernetes/serial/ProfileList (5.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2.475739463s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (2.627886927s)
--- PASS: TestNoKubernetes/serial/ProfileList (5.10s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-489001
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-489001: (1.505909419s)
--- PASS: TestNoKubernetes/serial/Stop (1.51s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (23.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-489001 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-489001 --driver=kvm2  --container-runtime=crio: (23.094106606s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.09s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-489001 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-489001 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.899908ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-078978
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-078978: (1.039871206s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                    
TestPause/serial/Start (91.18s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-648153 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-648153 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m31.18271813s)
--- PASS: TestPause/serial/Start (91.18s)

                                                
                                    
TestNetworkPlugins/group/false (3.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-793608 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-793608 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (111.078076ms)

                                                
                                                
-- stdout --
	* [false-793608] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20623
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 14:00:40.131142 2229546 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:00:40.131460 2229546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:00:40.131473 2229546 out.go:358] Setting ErrFile to fd 2...
	I0414 14:00:40.131480 2229546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:00:40.131673 2229546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20623-2183077/.minikube/bin
	I0414 14:00:40.132353 2229546 out.go:352] Setting JSON to false
	I0414 14:00:40.133568 2229546 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":168179,"bootTime":1744471061,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:00:40.133685 2229546 start.go:139] virtualization: kvm guest
	I0414 14:00:40.135759 2229546 out.go:177] * [false-793608] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:00:40.136937 2229546 out.go:177]   - MINIKUBE_LOCATION=20623
	I0414 14:00:40.136953 2229546 notify.go:220] Checking for updates...
	I0414 14:00:40.139132 2229546 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:00:40.140283 2229546 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20623-2183077/kubeconfig
	I0414 14:00:40.141411 2229546 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20623-2183077/.minikube
	I0414 14:00:40.142581 2229546 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:00:40.143707 2229546 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:00:40.145298 2229546 config.go:182] Loaded profile config "kubernetes-upgrade-461086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 14:00:40.145404 2229546 config.go:182] Loaded profile config "pause-648153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:00:40.145481 2229546 config.go:182] Loaded profile config "running-upgrade-742924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0414 14:00:40.145626 2229546 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:00:40.184082 2229546 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:00:40.185405 2229546 start.go:297] selected driver: kvm2
	I0414 14:00:40.185425 2229546 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:00:40.185438 2229546 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:00:40.187409 2229546 out.go:201] 
	W0414 14:00:40.188550 2229546 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0414 14:00:40.189513 2229546 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-793608 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-793608

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-793608

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-793608

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-793608

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-793608

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-793608

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-793608

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-793608

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-793608

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-793608

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-793608

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-793608" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-793608" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 14:00:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.188:8443
  name: pause-648153
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 14:00:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.101:8443
  name: running-upgrade-742924
contexts:
- context:
    cluster: pause-648153
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 14:00:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-648153
  name: pause-648153
- context:
    cluster: running-upgrade-742924
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 14:00:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-742924
  name: running-upgrade-742924
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-648153
  user:
    client-certificate: /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/pause-648153/client.crt
    client-key: /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/pause-648153/client.key
- name: running-upgrade-742924
  user:
    client-certificate: /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/running-upgrade-742924/client.crt
    client-key: /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/running-upgrade-742924/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-793608

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793608"

                                                
                                                
----------------------- debugLogs end: false-793608 [took: 2.845904597s] --------------------------------
helpers_test.go:175: Cleaning up "false-793608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-793608
--- PASS: TestNetworkPlugins/group/false (3.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (130.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-496809 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-496809 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (2m10.95362592s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (130.95s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (97.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-242761 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-242761 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m37.013041231s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (97.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (15.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-242761 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d0fa52b2-d4b2-47fa-a251-8a102f00fdd6] Pending
helpers_test.go:344: "busybox" [d0fa52b2-d4b2-47fa-a251-8a102f00fdd6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d0fa52b2-d4b2-47fa-a251-8a102f00fdd6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 15.005315453s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-242761 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (15.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (13.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-496809 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [37b7cb80-7e39-47c2-8b7d-07a18a7ed6eb] Pending
helpers_test.go:344: "busybox" [37b7cb80-7e39-47c2-8b7d-07a18a7ed6eb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [37b7cb80-7e39-47c2-8b7d-07a18a7ed6eb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 13.006660043s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-496809 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (13.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-242761 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-242761 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.026274212s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-242761 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-242761 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-242761 --alsologtostderr -v=3: (1m31.076736604s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-460312 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-460312 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m0.370141606s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-496809 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-496809 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-496809 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-496809 --alsologtostderr -v=3: (1m31.304139765s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (15.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-460312 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [af579655-5d64-4bb2-b31a-49d74d534a05] Pending
helpers_test.go:344: "busybox" [af579655-5d64-4bb2-b31a-49d74d534a05] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [af579655-5d64-4bb2-b31a-49d74d534a05] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 15.003482698s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-460312 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (15.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-460312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-460312 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-460312 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-460312 --alsologtostderr -v=3: (1m31.116420437s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-242761 -n embed-certs-242761
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-242761 -n embed-certs-242761: exit status 7 (64.701268ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-242761 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (329.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-242761 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 14:05:27.986650 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-242761 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m28.902226254s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-242761 -n embed-certs-242761
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (329.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-496809 -n no-preload-496809
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-496809 -n no-preload-496809: exit status 7 (76.420392ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-496809 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (364.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-496809 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-496809 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (6m4.247598016s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-496809 -n no-preload-496809
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (364.73s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-460312 -n default-k8s-diff-port-460312
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-460312 -n default-k8s-diff-port-460312: exit status 7 (73.473688ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-460312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (336.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-460312 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 14:07:25.775320 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-460312 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m35.608720171s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-460312 -n default-k8s-diff-port-460312
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (336.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (2.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-954411 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-954411 --alsologtostderr -v=3: (2.296474461s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-954411 -n old-k8s-version-954411
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-954411 -n old-k8s-version-954411: exit status 7 (73.041833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-954411 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-b6vmn" [5c061c9f-1c69-4ad6-875a-38a0adba0e87] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-b6vmn" [5c061c9f-1c69-4ad6-875a-38a0adba0e87] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.003757241s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-b6vmn" [5c061c9f-1c69-4ad6-875a-38a0adba0e87] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004760709s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-242761 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-242761 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-242761 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-242761 -n embed-certs-242761
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-242761 -n embed-certs-242761: exit status 2 (241.40661ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-242761 -n embed-certs-242761
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-242761 -n embed-certs-242761: exit status 2 (248.348924ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-242761 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-242761 -n embed-certs-242761
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-242761 -n embed-certs-242761
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.63s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (46.68s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-024528 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-024528 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (46.679476271s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.68s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5vtn8" [0998d6fd-e78e-471b-90d0-d17f1584c774] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5vtn8" [0998d6fd-e78e-471b-90d0-d17f1584c774] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00576174s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
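
UserAppExistsAfterStop and AddonExistsAfterStop both reduce to waiting until pods matching a label selector become Ready in a namespace, here k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. The harness polls through helpers_test.go; a rough equivalent (not the test's own polling code) is a single kubectl wait, sketched below with the context name reused from this run.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Values taken from the run above; substitute any context, namespace, and selector.
	args := []string{
		"--context", "no-preload-496809",
		"wait", "--namespace=kubernetes-dashboard",
		"--for=condition=ready", "pod",
		"--selector=k8s-app=kubernetes-dashboard",
		"--timeout=9m0s",
	}
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("pods did not become ready:", err)
	}
}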

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5vtn8" [0998d6fd-e78e-471b-90d0-d17f1584c774] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005060982s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-496809 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-496809 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-496809 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-496809 -n no-preload-496809
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-496809 -n no-preload-496809: exit status 2 (277.453071ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-496809 -n no-preload-496809
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-496809 -n no-preload-496809: exit status 2 (284.843744ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-496809 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-496809 -n no-preload-496809
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-496809 -n no-preload-496809
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (89.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-793608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-793608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m29.420853733s)
--- PASS: TestNetworkPlugins/group/auto/Start (89.42s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-024528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-024528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.478682133s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-024528 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-024528 --alsologtostderr -v=3: (10.74025051s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.74s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-024528 -n newest-cni-024528
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-024528 -n newest-cni-024528: exit status 7 (95.240205ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-024528 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)
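
EnableAddonAfterStop relies on a different exit-code convention than Pause: on a stopped profile, status exits with code 7 and prints Stopped for the host, and addons can still be toggled so the change is picked up by the subsequent SecondStart. A minimal sketch of that check, with the profile name as a placeholder:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "newest-cni-024528" // placeholder: any stopped profile

	out, err := exec.Command("minikube", "status", "--format={{.Host}}", "-p", profile).Output()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	// Exit code 7 with "Stopped" is the expected state here, not a failure.
	fmt.Printf("host=%s exit=%d\n", strings.TrimSpace(string(out)), code)

	// Addon changes made while the cluster is down take effect on the next start.
	if msg, err := exec.Command("minikube", "addons", "enable", "dashboard", "-p", profile).CombinedOutput(); err != nil {
		fmt.Println("enable failed:", err, string(msg))
	}
}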

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (53.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-024528 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-024528 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (53.652969224s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-024528 -n newest-cni-024528
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (53.96s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-h5ntv" [860166a5-840c-4d06-991d-54333f8f47cd] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-h5ntv" [860166a5-840c-4d06-991d-54333f8f47cd] Running
E0414 14:12:25.775011 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/addons-102056/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.003629598s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-h5ntv" [860166a5-840c-4d06-991d-54333f8f47cd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004654462s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-460312 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-460312 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-460312 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-460312 -n default-k8s-diff-port-460312
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-460312 -n default-k8s-diff-port-460312: exit status 2 (311.604241ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-460312 -n default-k8s-diff-port-460312
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-460312 -n default-k8s-diff-port-460312: exit status 2 (283.478821ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-460312 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-460312 -n default-k8s-diff-port-460312
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-460312 -n default-k8s-diff-port-460312
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (75.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-793608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-793608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m15.02864044s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (75.03s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-024528 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.68s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-024528 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-024528 -n newest-cni-024528
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-024528 -n newest-cni-024528: exit status 2 (240.171068ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-024528 -n newest-cni-024528
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-024528 -n newest-cni-024528: exit status 2 (246.487788ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-024528 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-024528 -n newest-cni-024528
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-024528 -n newest-cni-024528
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (89.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-793608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-793608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m29.266177821s)
--- PASS: TestNetworkPlugins/group/calico/Start (89.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-793608 "pgrep -a kubelet"
I0414 14:13:21.989544 2190400 config.go:182] Loaded profile config "auto-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-793608 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-p6ll2" [bc96a67d-312e-4a36-9bb1-6362917dfa4b] Pending
helpers_test.go:344: "netcat-5d86dc444-p6ll2" [bc96a67d-312e-4a36-9bb1-6362917dfa4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-p6ll2" [bc96a67d-312e-4a36-9bb1-6362917dfa4b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005227917s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-793608 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-793608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-793608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
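
Every network-plugin group ends with the same three probes against the netcat deployment created by NetCatPod: cluster DNS resolution (nslookup kubernetes.default), a loopback connection inside the pod, and a hairpin connection back through the pod's own service. The kubectl invocations are visible in the log lines above; the sketch below simply replays them against an existing context (the context name is a placeholder for any of the *-793608 profiles).

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a command inside the netcat deployment and reports success or failure.
func probe(context, name string, cmd ...string) {
	args := append([]string{"--context", context, "exec", "deployment/netcat", "--"}, cmd...)
	if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
		fmt.Printf("%s FAILED: %v\n%s", name, err, out)
	} else {
		fmt.Printf("%s ok\n", name)
	}
}

func main() {
	ctx := "auto-793608" // placeholder context from the run above

	probe(ctx, "dns", "nslookup", "kubernetes.default")
	probe(ctx, "localhost", "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")
	probe(ctx, "hairpin", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
}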

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (74.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-793608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-793608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m14.801137723s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qsrd5" [87779228-6e2e-47bf-9fc7-c84ff0e6e52d] Running
E0414 14:13:51.394937 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:13:56.517247 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004286915s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-793608 "pgrep -a kubelet"
I0414 14:13:57.237234 2190400 config.go:182] Loaded profile config "kindnet-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-793608 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7l85w" [8cf96414-52a8-48e3-8694-f2db56444320] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-7l85w" [8cf96414-52a8-48e3-8694-f2db56444320] Running
E0414 14:14:06.759076 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004552954s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-793608 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-793608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-793608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (58.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-793608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0414 14:14:27.240963 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-793608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (58.89661994s)
--- PASS: TestNetworkPlugins/group/bridge/Start (58.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7hcql" [40921a9d-a981-4607-ba50-0e6b55742abd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003403421s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-793608 "pgrep -a kubelet"
I0414 14:14:40.524769 2190400 config.go:182] Loaded profile config "calico-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-793608 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-6t5p7" [5bf54f01-472a-45a7-9b29-2f23ff549aaf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-6t5p7" [5bf54f01-472a-45a7-9b29-2f23ff549aaf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004713125s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-793608 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-793608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-793608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-793608 "pgrep -a kubelet"
I0414 14:15:04.711785 2190400 config.go:182] Loaded profile config "custom-flannel-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-793608 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dlbvg" [cce22dc5-ddba-4f0f-a3c6-ed93ae940adf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0414 14:15:08.203803 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/no-preload-496809/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-dlbvg" [cce22dc5-ddba-4f0f-a3c6-ed93ae940adf] Running
E0414 14:15:12.944918 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/default-k8s-diff-port-460312/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005342192s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (76.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-793608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-793608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m16.689983469s)
--- PASS: TestNetworkPlugins/group/flannel/Start (76.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-793608 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-793608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-793608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-793608 "pgrep -a kubelet"
I0414 14:15:26.209492 2190400 config.go:182] Loaded profile config "bridge-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-793608 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-w4wxk" [cf6422f7-8eb2-4bd6-85a6-ec676384af46] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0414 14:15:27.986330 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/functional-891289/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-w4wxk" [cf6422f7-8eb2-4bd6-85a6-ec676384af46] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004514577s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (91.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-793608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0414 14:15:33.427140 2190400 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/default-k8s-diff-port-460312/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-793608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m31.746117912s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (91.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-793608 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-793608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-793608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-dpdz6" [66082424-f4eb-418b-b2f4-d14cff676b0c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003697798s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-793608 "pgrep -a kubelet"
I0414 14:16:32.502576 2190400 config.go:182] Loaded profile config "flannel-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-793608 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dbdn6" [b6137205-eea3-409a-b913-e23fa9440b7d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-dbdn6" [b6137205-eea3-409a-b913-e23fa9440b7d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003154332s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-793608 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-793608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-793608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-793608 "pgrep -a kubelet"
I0414 14:17:03.617488 2190400 config.go:182] Loaded profile config "enable-default-cni-793608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-793608 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-drt4s" [1fd84a58-345b-4173-80ab-8dcb2cddd042] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-drt4s" [1fd84a58-345b-4173-80ab-8dcb2cddd042] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003023785s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-793608 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-793608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-793608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    

Test skip (40/321)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.2/cached-images 0
15 TestDownloadOnly/v1.32.2/binaries 0
16 TestDownloadOnly/v1.32.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
134 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
271 TestStartStop/group/disable-driver-mounts 0.15
277 TestNetworkPlugins/group/kubenet 2.99
285 TestNetworkPlugins/group/cilium 3.37
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-102056 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-572728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-572728
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-793608 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-793608

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-793608

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-793608

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-793608

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-793608

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-793608

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-793608

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-793608

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-793608

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-793608

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-793608

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-793608" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-793608" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 14:00:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.188:8443
  name: pause-648153
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 14:00:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.101:8443
  name: running-upgrade-742924
contexts:
- context:
    cluster: pause-648153
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 14:00:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-648153
  name: pause-648153
- context:
    cluster: running-upgrade-742924
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 14:00:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-742924
  name: running-upgrade-742924
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-648153
  user:
    client-certificate: /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/pause-648153/client.crt
    client-key: /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/pause-648153/client.key
- name: running-upgrade-742924
  user:
    client-certificate: /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/running-upgrade-742924/client.crt
    client-key: /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/running-upgrade-742924/client.key
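
Note: the kubeconfig dumped above contains only the pause-648153 and running-upgrade-742924 entries and an empty current-context, which matches every debug command above failing with "context was not found for specified context: kubenet-793608". A minimal Go sketch of that lookup, using client-go's clientcmd loader; the kubeconfig path and context name are illustrative values taken from this log, not minikube's own test code:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path for illustration; in practice this comes from KUBECONFIG or ~/.kube/config.
	kubeconfig := "/home/jenkins/minikube-integration/20623-2183077/.minikube/kubeconfig"
	wanted := "kubenet-793608"

	// LoadFromFile parses the kubeconfig into an api.Config without contacting any cluster.
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}

	// The debug commands above fail because this map has no entry for the wanted context.
	if _, ok := cfg.Contexts[wanted]; !ok {
		fmt.Printf("context %q was not found (%d contexts defined)\n", wanted, len(cfg.Contexts))
		return
	}
	fmt.Printf("context %q exists\n", wanted)
}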

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-793608

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793608"

                                                
                                                
----------------------- debugLogs end: kubenet-793608 [took: 2.833922946s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-793608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-793608
--- SKIP: TestNetworkPlugins/group/kubenet (2.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-793608 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-793608

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-793608

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-793608

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-793608

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-793608

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-793608

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-793608

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-793608

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-793608

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-793608

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-793608

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-793608" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-793608

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-793608

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-793608

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-793608

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-793608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-793608" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 14:00:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.188:8443
  name: pause-648153
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20623-2183077/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 14:00:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.101:8443
  name: running-upgrade-742924
contexts:
- context:
    cluster: pause-648153
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 14:00:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-648153
  name: pause-648153
- context:
    cluster: running-upgrade-742924
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 14:00:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-742924
  name: running-upgrade-742924
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-648153
  user:
    client-certificate: /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/pause-648153/client.crt
    client-key: /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/pause-648153/client.key
- name: running-upgrade-742924
  user:
    client-certificate: /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/running-upgrade-742924/client.crt
    client-key: /home/jenkins/minikube-integration/20623-2183077/.minikube/profiles/running-upgrade-742924/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-793608

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-793608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793608"

                                                
                                                
----------------------- debugLogs end: cilium-793608 [took: 3.222291949s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-793608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-793608
--- SKIP: TestNetworkPlugins/group/cilium (3.37s)
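
Nearly every entry in this skip list is gated on the driver or container runtime under test (crio on the kvm2 driver in this run). A minimal, illustrative sketch of that gating pattern with Go's testing package; the containerRuntime variable is a stand-in for the harness configuration, not an actual minikube test helper:

package example

import "testing"

// containerRuntime would come from the test harness flags; it is a placeholder here.
var containerRuntime = "crio"

func TestRequiresDockerRuntime(t *testing.T) {
	// Mirrors skip messages like "only runs with docker container runtime, currently testing crio".
	if containerRuntime != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", containerRuntime)
	}
	// docker-specific assertions would follow here
}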

                                                
                                    